Test Report: KVM_Linux_crio 18259

540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352

Tests failed (29/304)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 161.9
53 TestAddons/StoppedEnableDisable 154.16
165 TestIngressAddonLegacy/StartLegacyK8sCluster 286.1
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 97.07
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 88.86
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.23
224 TestMultiNode/serial/RestartKeepsNodes 690.5
226 TestMultiNode/serial/StopMultiNode 142.12
233 TestPreload 348.92
241 TestKubernetesUpgrade 418.76
277 TestStartStop/group/old-k8s-version/serial/FirstStart 270.84
278 TestPause/serial/SecondStartNoReconfiguration 72.04
288 TestStartStop/group/no-preload/serial/Stop 138.78
290 TestStartStop/group/embed-certs/serial/Stop 138.74
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.92
294 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 107.75
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.41
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
302 TestStartStop/group/old-k8s-version/serial/SecondStart 774.88
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.84
306 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.52
307 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.65
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.45
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 104.4
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 114.68
313 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 168.66
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 12.41
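
For local triage, an individual entry from the table above can generally be re-run in isolation with the standard Go test runner from a minikube source checkout. The sketch below is only a hedged example: the ./test/integration package path and the -minikube-start-args flag are assumptions based on the integration suite's layout, and the exact invocation used by this job (e.g. via make integration) may differ.

	# Hedged sketch: re-run one failure from this report against the same driver/runtime.
	# Assumes a minikube checkout with out/minikube-linux-amd64 already built.
	go test ./test/integration -v -timeout 90m \
	  -run 'TestAddons/parallel/Ingress' \
	  -minikube-start-args="--driver=kvm2 --container-runtime=crio"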
TestAddons/parallel/Ingress (161.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-848237 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-848237 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-848237 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4e49acc6-b997-4f27-b129-34cfa10cb8cb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4e49acc6-b997-4f27-b129-34cfa10cb8cb] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.005097387s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-848237 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.103189386s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-848237 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.195
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 addons disable ingress-dns --alsologtostderr -v=1: (1.491775274s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 addons disable ingress --alsologtostderr -v=1: (7.970007314s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-848237 -n addons-848237
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 logs -n 25: (1.393572541s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-928093                                                                     | download-only-928093 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| delete  | -p download-only-392053                                                                     | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| delete  | -p download-only-181797                                                                     | download-only-181797 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| delete  | -p download-only-928093                                                                     | download-only-928093 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-104722 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC |                     |
	|         | binary-mirror-104722                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36219                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-104722                                                                     | binary-mirror-104722 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| addons  | enable dashboard -p                                                                         | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC |                     |
	|         | addons-848237                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC |                     |
	|         | addons-848237                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-848237 --wait=true                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-848237 addons                                                                        | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:42 UTC | 29 Feb 24 17:42 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-848237 ssh cat                                                                       | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:42 UTC | 29 Feb 24 17:42 UTC |
	|         | /opt/local-path-provisioner/pvc-1474dba4-8760-495c-bcc0-f8b3ca2ce82e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-848237 addons disable                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:42 UTC | 29 Feb 24 17:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:42 UTC | 29 Feb 24 17:42 UTC |
	|         | addons-848237                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:42 UTC | 29 Feb 24 17:42 UTC |
	|         | -p addons-848237                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-848237 ip                                                                            | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	| addons  | addons-848237 addons disable                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | addons-848237                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | -p addons-848237                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-848237 ssh curl -s                                                                   | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-848237 addons disable                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-848237 addons                                                                        | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-848237 addons                                                                        | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:43 UTC | 29 Feb 24 17:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-848237 ip                                                                            | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	| addons  | addons-848237 addons disable                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-848237 addons disable                                                                | addons-848237        | jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:40:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:40:12.586146   14730 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:40:12.586257   14730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:40:12.586265   14730 out.go:304] Setting ErrFile to fd 2...
	I0229 17:40:12.586269   14730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:40:12.586434   14730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:40:12.587542   14730 out.go:298] Setting JSON to false
	I0229 17:40:12.588617   14730 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1357,"bootTime":1709227056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:40:12.588685   14730 start.go:139] virtualization: kvm guest
	I0229 17:40:12.590687   14730 out.go:177] * [addons-848237] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:40:12.592523   14730 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:40:12.593702   14730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:40:12.592553   14730 notify.go:220] Checking for updates...
	I0229 17:40:12.595012   14730 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:40:12.596378   14730 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:40:12.597805   14730 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:40:12.599297   14730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:40:12.600589   14730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:40:12.632177   14730 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 17:40:12.633582   14730 start.go:299] selected driver: kvm2
	I0229 17:40:12.633598   14730 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:40:12.633608   14730 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:40:12.634313   14730 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:40:12.634382   14730 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:40:12.648810   14730 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:40:12.648853   14730 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:40:12.649086   14730 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 17:40:12.649159   14730 cni.go:84] Creating CNI manager for ""
	I0229 17:40:12.649174   14730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:40:12.649181   14730 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:40:12.649192   14730 start_flags.go:323] config:
	{Name:addons-848237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-848237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:40:12.649354   14730 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:40:12.651074   14730 out.go:177] * Starting control plane node addons-848237 in cluster addons-848237
	I0229 17:40:12.652254   14730 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 17:40:12.652293   14730 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 17:40:12.652305   14730 cache.go:56] Caching tarball of preloaded images
	I0229 17:40:12.652382   14730 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 17:40:12.652395   14730 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 17:40:12.652712   14730 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/config.json ...
	I0229 17:40:12.652735   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/config.json: {Name:mkb38be676247b94a9ad402c942de1bf3b8948e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:12.652879   14730 start.go:365] acquiring machines lock for addons-848237: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:40:12.652940   14730 start.go:369] acquired machines lock for "addons-848237" in 44.701µs
	I0229 17:40:12.652959   14730 start.go:93] Provisioning new machine with config: &{Name:addons-848237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-848237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 17:40:12.653049   14730 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 17:40:12.654644   14730 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0229 17:40:12.654779   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:40:12.654831   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:40:12.668808   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0229 17:40:12.669186   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:40:12.669707   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:40:12.669728   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:40:12.670049   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:40:12.670297   14730 main.go:141] libmachine: (addons-848237) Calling .GetMachineName
	I0229 17:40:12.670459   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:12.670603   14730 start.go:159] libmachine.API.Create for "addons-848237" (driver="kvm2")
	I0229 17:40:12.670634   14730 client.go:168] LocalClient.Create starting
	I0229 17:40:12.670674   14730 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 17:40:12.781610   14730 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 17:40:12.885022   14730 main.go:141] libmachine: Running pre-create checks...
	I0229 17:40:12.885042   14730 main.go:141] libmachine: (addons-848237) Calling .PreCreateCheck
	I0229 17:40:12.885553   14730 main.go:141] libmachine: (addons-848237) Calling .GetConfigRaw
	I0229 17:40:12.885966   14730 main.go:141] libmachine: Creating machine...
	I0229 17:40:12.885981   14730 main.go:141] libmachine: (addons-848237) Calling .Create
	I0229 17:40:12.886115   14730 main.go:141] libmachine: (addons-848237) Creating KVM machine...
	I0229 17:40:12.887328   14730 main.go:141] libmachine: (addons-848237) DBG | found existing default KVM network
	I0229 17:40:12.888030   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:12.887894   14752 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0229 17:40:12.893377   14730 main.go:141] libmachine: (addons-848237) DBG | trying to create private KVM network mk-addons-848237 192.168.39.0/24...
	I0229 17:40:12.952553   14730 main.go:141] libmachine: (addons-848237) DBG | private KVM network mk-addons-848237 192.168.39.0/24 created
	I0229 17:40:12.952573   14730 main.go:141] libmachine: (addons-848237) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237 ...
	I0229 17:40:12.952582   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:12.952515   14752 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:40:12.952659   14730 main.go:141] libmachine: (addons-848237) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:40:12.952709   14730 main.go:141] libmachine: (addons-848237) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 17:40:13.173670   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:13.173573   14752 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa...
	I0229 17:40:13.336053   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:13.335914   14752 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/addons-848237.rawdisk...
	I0229 17:40:13.336083   14730 main.go:141] libmachine: (addons-848237) DBG | Writing magic tar header
	I0229 17:40:13.336097   14730 main.go:141] libmachine: (addons-848237) DBG | Writing SSH key tar header
	I0229 17:40:13.336116   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:13.336036   14752 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237 ...
	I0229 17:40:13.336218   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237 (perms=drwx------)
	I0229 17:40:13.336260   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237
	I0229 17:40:13.336272   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 17:40:13.336290   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 17:40:13.336301   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 17:40:13.336317   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 17:40:13.336324   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 17:40:13.336334   14730 main.go:141] libmachine: (addons-848237) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 17:40:13.336348   14730 main.go:141] libmachine: (addons-848237) Creating domain...
	I0229 17:40:13.336362   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:40:13.336376   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 17:40:13.336389   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 17:40:13.336401   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home/jenkins
	I0229 17:40:13.336409   14730 main.go:141] libmachine: (addons-848237) DBG | Checking permissions on dir: /home
	I0229 17:40:13.336416   14730 main.go:141] libmachine: (addons-848237) DBG | Skipping /home - not owner
	I0229 17:40:13.337303   14730 main.go:141] libmachine: (addons-848237) define libvirt domain using xml: 
	I0229 17:40:13.337327   14730 main.go:141] libmachine: (addons-848237) <domain type='kvm'>
	I0229 17:40:13.337356   14730 main.go:141] libmachine: (addons-848237)   <name>addons-848237</name>
	I0229 17:40:13.337381   14730 main.go:141] libmachine: (addons-848237)   <memory unit='MiB'>4000</memory>
	I0229 17:40:13.337390   14730 main.go:141] libmachine: (addons-848237)   <vcpu>2</vcpu>
	I0229 17:40:13.337400   14730 main.go:141] libmachine: (addons-848237)   <features>
	I0229 17:40:13.337409   14730 main.go:141] libmachine: (addons-848237)     <acpi/>
	I0229 17:40:13.337419   14730 main.go:141] libmachine: (addons-848237)     <apic/>
	I0229 17:40:13.337428   14730 main.go:141] libmachine: (addons-848237)     <pae/>
	I0229 17:40:13.337437   14730 main.go:141] libmachine: (addons-848237)     
	I0229 17:40:13.337447   14730 main.go:141] libmachine: (addons-848237)   </features>
	I0229 17:40:13.337455   14730 main.go:141] libmachine: (addons-848237)   <cpu mode='host-passthrough'>
	I0229 17:40:13.337463   14730 main.go:141] libmachine: (addons-848237)   
	I0229 17:40:13.337472   14730 main.go:141] libmachine: (addons-848237)   </cpu>
	I0229 17:40:13.337484   14730 main.go:141] libmachine: (addons-848237)   <os>
	I0229 17:40:13.337495   14730 main.go:141] libmachine: (addons-848237)     <type>hvm</type>
	I0229 17:40:13.337507   14730 main.go:141] libmachine: (addons-848237)     <boot dev='cdrom'/>
	I0229 17:40:13.337520   14730 main.go:141] libmachine: (addons-848237)     <boot dev='hd'/>
	I0229 17:40:13.337540   14730 main.go:141] libmachine: (addons-848237)     <bootmenu enable='no'/>
	I0229 17:40:13.337553   14730 main.go:141] libmachine: (addons-848237)   </os>
	I0229 17:40:13.337567   14730 main.go:141] libmachine: (addons-848237)   <devices>
	I0229 17:40:13.337579   14730 main.go:141] libmachine: (addons-848237)     <disk type='file' device='cdrom'>
	I0229 17:40:13.337596   14730 main.go:141] libmachine: (addons-848237)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/boot2docker.iso'/>
	I0229 17:40:13.337606   14730 main.go:141] libmachine: (addons-848237)       <target dev='hdc' bus='scsi'/>
	I0229 17:40:13.337619   14730 main.go:141] libmachine: (addons-848237)       <readonly/>
	I0229 17:40:13.337628   14730 main.go:141] libmachine: (addons-848237)     </disk>
	I0229 17:40:13.337642   14730 main.go:141] libmachine: (addons-848237)     <disk type='file' device='disk'>
	I0229 17:40:13.337663   14730 main.go:141] libmachine: (addons-848237)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 17:40:13.337684   14730 main.go:141] libmachine: (addons-848237)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/addons-848237.rawdisk'/>
	I0229 17:40:13.337698   14730 main.go:141] libmachine: (addons-848237)       <target dev='hda' bus='virtio'/>
	I0229 17:40:13.337709   14730 main.go:141] libmachine: (addons-848237)     </disk>
	I0229 17:40:13.337717   14730 main.go:141] libmachine: (addons-848237)     <interface type='network'>
	I0229 17:40:13.337726   14730 main.go:141] libmachine: (addons-848237)       <source network='mk-addons-848237'/>
	I0229 17:40:13.337736   14730 main.go:141] libmachine: (addons-848237)       <model type='virtio'/>
	I0229 17:40:13.337751   14730 main.go:141] libmachine: (addons-848237)     </interface>
	I0229 17:40:13.337762   14730 main.go:141] libmachine: (addons-848237)     <interface type='network'>
	I0229 17:40:13.337774   14730 main.go:141] libmachine: (addons-848237)       <source network='default'/>
	I0229 17:40:13.337784   14730 main.go:141] libmachine: (addons-848237)       <model type='virtio'/>
	I0229 17:40:13.337795   14730 main.go:141] libmachine: (addons-848237)     </interface>
	I0229 17:40:13.337804   14730 main.go:141] libmachine: (addons-848237)     <serial type='pty'>
	I0229 17:40:13.337809   14730 main.go:141] libmachine: (addons-848237)       <target port='0'/>
	I0229 17:40:13.337818   14730 main.go:141] libmachine: (addons-848237)     </serial>
	I0229 17:40:13.337830   14730 main.go:141] libmachine: (addons-848237)     <console type='pty'>
	I0229 17:40:13.337841   14730 main.go:141] libmachine: (addons-848237)       <target type='serial' port='0'/>
	I0229 17:40:13.337852   14730 main.go:141] libmachine: (addons-848237)     </console>
	I0229 17:40:13.337862   14730 main.go:141] libmachine: (addons-848237)     <rng model='virtio'>
	I0229 17:40:13.337874   14730 main.go:141] libmachine: (addons-848237)       <backend model='random'>/dev/random</backend>
	I0229 17:40:13.337883   14730 main.go:141] libmachine: (addons-848237)     </rng>
	I0229 17:40:13.337891   14730 main.go:141] libmachine: (addons-848237)     
	I0229 17:40:13.337899   14730 main.go:141] libmachine: (addons-848237)     
	I0229 17:40:13.337925   14730 main.go:141] libmachine: (addons-848237)   </devices>
	I0229 17:40:13.337948   14730 main.go:141] libmachine: (addons-848237) </domain>
	I0229 17:40:13.337966   14730 main.go:141] libmachine: (addons-848237) 
	I0229 17:40:13.343866   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:15:df:68 in network default
	I0229 17:40:13.344353   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:13.344392   14730 main.go:141] libmachine: (addons-848237) Ensuring networks are active...
	I0229 17:40:13.344941   14730 main.go:141] libmachine: (addons-848237) Ensuring network default is active
	I0229 17:40:13.345198   14730 main.go:141] libmachine: (addons-848237) Ensuring network mk-addons-848237 is active
	I0229 17:40:13.345617   14730 main.go:141] libmachine: (addons-848237) Getting domain xml...
	I0229 17:40:13.346178   14730 main.go:141] libmachine: (addons-848237) Creating domain...
	I0229 17:40:14.691743   14730 main.go:141] libmachine: (addons-848237) Waiting to get IP...
	I0229 17:40:14.692638   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:14.693031   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:14.693069   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:14.693023   14752 retry.go:31] will retry after 263.679478ms: waiting for machine to come up
	I0229 17:40:14.958578   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:14.958988   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:14.959042   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:14.958956   14752 retry.go:31] will retry after 357.559492ms: waiting for machine to come up
	I0229 17:40:15.318680   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:15.319110   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:15.319138   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:15.319075   14752 retry.go:31] will retry after 302.743381ms: waiting for machine to come up
	I0229 17:40:15.623475   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:15.623934   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:15.623963   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:15.623878   14752 retry.go:31] will retry after 601.165717ms: waiting for machine to come up
	I0229 17:40:16.226241   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:16.226664   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:16.226725   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:16.226671   14752 retry.go:31] will retry after 675.3008ms: waiting for machine to come up
	I0229 17:40:16.903092   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:16.903514   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:16.903542   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:16.903466   14752 retry.go:31] will retry after 934.047781ms: waiting for machine to come up
	I0229 17:40:17.838958   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:17.839354   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:17.839376   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:17.839317   14752 retry.go:31] will retry after 1.131277565s: waiting for machine to come up
	I0229 17:40:18.972635   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:18.973005   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:18.973026   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:18.972969   14752 retry.go:31] will retry after 1.04198983s: waiting for machine to come up
	I0229 17:40:20.016025   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:20.016499   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:20.016522   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:20.016452   14752 retry.go:31] will retry after 1.242141481s: waiting for machine to come up
	I0229 17:40:21.260735   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:21.261113   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:21.261141   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:21.261072   14752 retry.go:31] will retry after 1.473377906s: waiting for machine to come up
	I0229 17:40:22.736339   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:22.736787   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:22.736848   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:22.736757   14752 retry.go:31] will retry after 2.761539779s: waiting for machine to come up
	I0229 17:40:25.500282   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:25.500750   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:25.500778   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:25.500707   14752 retry.go:31] will retry after 2.561410913s: waiting for machine to come up
	I0229 17:40:28.063643   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:28.064024   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:28.064056   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:28.063982   14752 retry.go:31] will retry after 3.379512152s: waiting for machine to come up
	I0229 17:40:31.447332   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:31.447649   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find current IP address of domain addons-848237 in network mk-addons-848237
	I0229 17:40:31.447677   14730 main.go:141] libmachine: (addons-848237) DBG | I0229 17:40:31.447603   14752 retry.go:31] will retry after 3.596416463s: waiting for machine to come up
	I0229 17:40:35.046987   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.047412   14730 main.go:141] libmachine: (addons-848237) Found IP for machine: 192.168.39.195
	I0229 17:40:35.047438   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has current primary IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.047451   14730 main.go:141] libmachine: (addons-848237) Reserving static IP address...
	I0229 17:40:35.047876   14730 main.go:141] libmachine: (addons-848237) DBG | unable to find host DHCP lease matching {name: "addons-848237", mac: "52:54:00:08:26:7d", ip: "192.168.39.195"} in network mk-addons-848237
	I0229 17:40:35.116712   14730 main.go:141] libmachine: (addons-848237) DBG | Getting to WaitForSSH function...
	I0229 17:40:35.116740   14730 main.go:141] libmachine: (addons-848237) Reserved static IP address: 192.168.39.195
	I0229 17:40:35.116750   14730 main.go:141] libmachine: (addons-848237) Waiting for SSH to be available...
	I0229 17:40:35.119389   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.119766   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.119795   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.119984   14730 main.go:141] libmachine: (addons-848237) DBG | Using SSH client type: external
	I0229 17:40:35.120024   14730 main.go:141] libmachine: (addons-848237) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa (-rw-------)
	I0229 17:40:35.120064   14730 main.go:141] libmachine: (addons-848237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:40:35.120083   14730 main.go:141] libmachine: (addons-848237) DBG | About to run SSH command:
	I0229 17:40:35.120094   14730 main.go:141] libmachine: (addons-848237) DBG | exit 0
	I0229 17:40:35.255101   14730 main.go:141] libmachine: (addons-848237) DBG | SSH cmd err, output: <nil>: 
	I0229 17:40:35.255333   14730 main.go:141] libmachine: (addons-848237) KVM machine creation complete!
	I0229 17:40:35.255563   14730 main.go:141] libmachine: (addons-848237) Calling .GetConfigRaw
	I0229 17:40:35.256124   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:35.256311   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:35.256470   14730 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 17:40:35.256485   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:40:35.257688   14730 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 17:40:35.257708   14730 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 17:40:35.257713   14730 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 17:40:35.257720   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:35.259736   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.260065   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.260086   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.260246   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:35.260412   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.260562   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.260681   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:35.260855   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:35.261065   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:35.261076   14730 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 17:40:35.366450   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:40:35.366469   14730 main.go:141] libmachine: Detecting the provisioner...
	I0229 17:40:35.366476   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:35.369291   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.369595   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.369623   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.369793   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:35.369977   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.370134   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.370253   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:35.370443   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:35.370617   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:35.370631   14730 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 17:40:35.480192   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 17:40:35.480269   14730 main.go:141] libmachine: found compatible host: buildroot
	I0229 17:40:35.480282   14730 main.go:141] libmachine: Provisioning with buildroot...
	I0229 17:40:35.480293   14730 main.go:141] libmachine: (addons-848237) Calling .GetMachineName
	I0229 17:40:35.480572   14730 buildroot.go:166] provisioning hostname "addons-848237"
	I0229 17:40:35.480593   14730 main.go:141] libmachine: (addons-848237) Calling .GetMachineName
	I0229 17:40:35.480750   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:35.483444   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.483827   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.483850   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.483995   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:35.484153   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.484291   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.484425   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:35.484568   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:35.484715   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:35.484727   14730 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-848237 && echo "addons-848237" | sudo tee /etc/hostname
	I0229 17:40:35.606424   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-848237
	
	I0229 17:40:35.606452   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:35.608853   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.609148   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.609186   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.609340   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:35.609537   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.609677   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:35.609800   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:35.609919   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:35.610084   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:35.610099   14730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-848237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-848237/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-848237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:40:35.724164   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:40:35.724193   14730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 17:40:35.724226   14730 buildroot.go:174] setting up certificates
	I0229 17:40:35.724241   14730 provision.go:83] configureAuth start
	I0229 17:40:35.724254   14730 main.go:141] libmachine: (addons-848237) Calling .GetMachineName
	I0229 17:40:35.724544   14730 main.go:141] libmachine: (addons-848237) Calling .GetIP
	I0229 17:40:35.726909   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.727335   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.727349   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.727554   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:35.729659   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.730003   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:35.730025   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:35.730155   14730 provision.go:138] copyHostCerts
	I0229 17:40:35.730217   14730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 17:40:35.730355   14730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 17:40:35.730432   14730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 17:40:35.730476   14730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.addons-848237 san=[192.168.39.195 192.168.39.195 localhost 127.0.0.1 minikube addons-848237]
	I0229 17:40:36.089825   14730 provision.go:172] copyRemoteCerts
	I0229 17:40:36.089883   14730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:40:36.089909   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.092446   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.092776   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.092801   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.092977   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.093180   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.093411   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.093569   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:40:36.178852   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 17:40:36.204849   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 17:40:36.231165   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 17:40:36.256642   14730 provision.go:86] duration metric: configureAuth took 532.389824ms
	I0229 17:40:36.256668   14730 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:40:36.256856   14730 config.go:182] Loaded profile config "addons-848237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 17:40:36.256935   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.259584   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.259944   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.259981   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.260091   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.260295   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.260474   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.260631   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.260765   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:36.260910   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:36.260922   14730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 17:40:36.530593   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
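The step above writes a one-line sysconfig fragment for CRI-O over SSH and restarts the service. Restated as a plain shell sketch (values copied from the log; the %!s(MISSING) in the logged command is a Go format-verb artifact, the intended verb is %s):

	sudo mkdir -p /etc/sysconfig
	printf '%s' "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio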
	
	I0229 17:40:36.530630   14730 main.go:141] libmachine: Checking connection to Docker...
	I0229 17:40:36.530638   14730 main.go:141] libmachine: (addons-848237) Calling .GetURL
	I0229 17:40:36.531849   14730 main.go:141] libmachine: (addons-848237) DBG | Using libvirt version 6000000
	I0229 17:40:36.533929   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.534209   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.534232   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.534371   14730 main.go:141] libmachine: Docker is up and running!
	I0229 17:40:36.534383   14730 main.go:141] libmachine: Reticulating splines...
	I0229 17:40:36.534389   14730 client.go:171] LocalClient.Create took 23.863745493s
	I0229 17:40:36.534407   14730 start.go:167] duration metric: libmachine.API.Create for "addons-848237" took 23.863806516s
	I0229 17:40:36.534422   14730 start.go:300] post-start starting for "addons-848237" (driver="kvm2")
	I0229 17:40:36.534434   14730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:40:36.534452   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:36.534699   14730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:40:36.534718   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.536609   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.536897   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.536922   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.537048   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.537208   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.537375   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.537507   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:40:36.621694   14730 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:40:36.626701   14730 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 17:40:36.626719   14730 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 17:40:36.626777   14730 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 17:40:36.626800   14730 start.go:303] post-start completed in 92.369938ms
	I0229 17:40:36.626828   14730 main.go:141] libmachine: (addons-848237) Calling .GetConfigRaw
	I0229 17:40:36.627371   14730 main.go:141] libmachine: (addons-848237) Calling .GetIP
	I0229 17:40:36.629751   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.630049   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.630071   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.630262   14730 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/config.json ...
	I0229 17:40:36.630417   14730 start.go:128] duration metric: createHost completed in 23.977358818s
	I0229 17:40:36.630436   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.632353   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.632617   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.632650   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.632783   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.632978   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.633125   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.633246   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.633408   14730 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:36.633555   14730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0229 17:40:36.633565   14730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 17:40:36.740183   14730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709228436.720814545
	
	I0229 17:40:36.740202   14730 fix.go:206] guest clock: 1709228436.720814545
	I0229 17:40:36.740209   14730 fix.go:219] Guest: 2024-02-29 17:40:36.720814545 +0000 UTC Remote: 2024-02-29 17:40:36.630427422 +0000 UTC m=+24.088213599 (delta=90.387123ms)
	I0229 17:40:36.740237   14730 fix.go:190] guest clock delta is within tolerance: 90.387123ms
	I0229 17:40:36.740242   14730 start.go:83] releasing machines lock for "addons-848237", held for 24.087290389s
	I0229 17:40:36.740259   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:36.740526   14730 main.go:141] libmachine: (addons-848237) Calling .GetIP
	I0229 17:40:36.742774   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.743100   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.743129   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.743266   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:36.743714   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:36.743858   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:40:36.743961   14730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:40:36.743996   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.744073   14730 ssh_runner.go:195] Run: cat /version.json
	I0229 17:40:36.744096   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:40:36.746482   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.746734   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.746760   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.746776   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.746945   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.747148   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.747167   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:36.747181   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:36.747276   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.747349   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:40:36.747434   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:40:36.747508   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:40:36.747626   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:40:36.747742   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:40:36.824590   14730 ssh_runner.go:195] Run: systemctl --version
	I0229 17:40:36.849556   14730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 17:40:37.012213   14730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 17:40:37.018701   14730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 17:40:37.018757   14730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 17:40:37.037303   14730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 17:40:37.037320   14730 start.go:475] detecting cgroup driver to use...
	I0229 17:40:37.037365   14730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 17:40:37.055315   14730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:40:37.070526   14730 docker.go:217] disabling cri-docker service (if available) ...
	I0229 17:40:37.070575   14730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 17:40:37.085820   14730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 17:40:37.101370   14730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 17:40:37.221573   14730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 17:40:37.381618   14730 docker.go:233] disabling docker service ...
	I0229 17:40:37.381688   14730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 17:40:37.397510   14730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 17:40:37.411618   14730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 17:40:37.538043   14730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 17:40:37.655584   14730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 17:40:37.671693   14730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:40:37.691697   14730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 17:40:37.691746   14730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:40:37.703550   14730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 17:40:37.703598   14730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:40:37.715468   14730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:40:37.735363   14730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:40:37.746215   14730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:40:37.757211   14730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:40:37.767272   14730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 17:40:37.767335   14730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 17:40:37.783248   14730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 17:40:37.793506   14730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:40:37.916226   14730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 17:40:38.058970   14730 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 17:40:38.059060   14730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 17:40:38.064211   14730 start.go:543] Will wait 60s for crictl version
	I0229 17:40:38.064276   14730 ssh_runner.go:195] Run: which crictl
	I0229 17:40:38.068121   14730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 17:40:38.112651   14730 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 17:40:38.112768   14730 ssh_runner.go:195] Run: crio --version
	I0229 17:40:38.141470   14730 ssh_runner.go:195] Run: crio --version
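The crictl/crio version checks above follow a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf. Collected from the preceding Run: lines for reference, the same configuration as a shell sketch:

	# use the pause image this test expects (registry.k8s.io/pause:3.9)
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	# run CRI-O with the cgroupfs cgroup manager and conmon in the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio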
	I0229 17:40:38.172877   14730 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 17:40:38.174211   14730 main.go:141] libmachine: (addons-848237) Calling .GetIP
	I0229 17:40:38.176936   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:38.177266   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:40:38.177294   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:40:38.177492   14730 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 17:40:38.182278   14730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:40:38.195545   14730 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 17:40:38.195598   14730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:40:38.230085   14730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 17:40:38.230156   14730 ssh_runner.go:195] Run: which lz4
	I0229 17:40:38.234468   14730 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 17:40:38.238820   14730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 17:40:38.238837   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 17:40:39.937124   14730 crio.go:444] Took 1.702678 seconds to copy over tarball
	I0229 17:40:39.937222   14730 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 17:40:42.660498   14730 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.723220075s)
	I0229 17:40:42.660522   14730 crio.go:451] Took 2.723369 seconds to extract the tarball
	I0229 17:40:42.660531   14730 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 17:40:42.703970   14730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:40:42.747795   14730 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 17:40:42.747817   14730 cache_images.go:84] Images are preloaded, skipping loading
	I0229 17:40:42.747867   14730 ssh_runner.go:195] Run: crio config
	I0229 17:40:42.792255   14730 cni.go:84] Creating CNI manager for ""
	I0229 17:40:42.792280   14730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:40:42.792305   14730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:40:42.792340   14730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-848237 NodeName:addons-848237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 17:40:42.792507   14730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-848237"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
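This rendered config is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and consumed by kubeadm init --config /var/tmp/minikube/kubeadm.yaml. A quick offline sanity check, assuming the config validate subcommand is available in this kubeadm release (not something the test itself runs), might look like:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml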
	
	I0229 17:40:42.792597   14730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-848237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-848237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
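The generated kubelet unit above overrides ExecStart with the flags shown and is installed as a systemd drop-in (10-kubeadm.conf, transferred a few lines below). On the guest, the effective unit can be inspected with standard systemd tooling, for example:

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf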
	I0229 17:40:42.792658   14730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 17:40:42.803921   14730 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:40:42.803971   14730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:40:42.814775   14730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0229 17:40:42.832249   14730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 17:40:42.849534   14730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0229 17:40:42.866951   14730 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0229 17:40:42.870955   14730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:40:42.884027   14730 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237 for IP: 192.168.39.195
	I0229 17:40:42.884052   14730 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:42.884181   14730 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 17:40:42.970324   14730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt ...
	I0229 17:40:42.970352   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt: {Name:mkd356359de8a4829396567a8852b0f6512b0c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:42.970976   14730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key ...
	I0229 17:40:42.970991   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key: {Name:mkba0b2a328b2309d8c05dff20aa32db1f050bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:42.971127   14730 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 17:40:43.065432   14730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt ...
	I0229 17:40:43.065458   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt: {Name:mk9ee79ac1bbed6b698d69e2007ba8c2a2170ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.065629   14730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key ...
	I0229 17:40:43.065642   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key: {Name:mk6d1d1e2e4dda7a844c51cc15bf6199f732c6cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.065764   14730 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.key
	I0229 17:40:43.065779   14730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt with IP's: []
	I0229 17:40:43.180287   14730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt ...
	I0229 17:40:43.180316   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: {Name:mkab157b40c85b23df16c4579d84fad2014e8a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.180488   14730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.key ...
	I0229 17:40:43.180501   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.key: {Name:mkb098e831fd60d0fff4d410a338e179c289e4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.180593   14730 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key.d0c1bc37
	I0229 17:40:43.180611   14730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt.d0c1bc37 with IP's: [192.168.39.195 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 17:40:43.287340   14730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt.d0c1bc37 ...
	I0229 17:40:43.287366   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt.d0c1bc37: {Name:mk673afdbb39dc11fce2c9f7f1e561d6c9e10457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.287538   14730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key.d0c1bc37 ...
	I0229 17:40:43.287553   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key.d0c1bc37: {Name:mkd4504fd67791363fd1cbcbc35d6a28fdb45324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.287642   14730 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt.d0c1bc37 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt
	I0229 17:40:43.287732   14730 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key.d0c1bc37 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key
	I0229 17:40:43.287785   14730 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.key
	I0229 17:40:43.287801   14730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.crt with IP's: []
	I0229 17:40:43.579637   14730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.crt ...
	I0229 17:40:43.579668   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.crt: {Name:mk3f4ac2e3448abc7e7c3af470613e2b2a753fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.579848   14730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.key ...
	I0229 17:40:43.579861   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.key: {Name:mkba98a3517e080309c1a73d858601cd4d151565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:43.580050   14730 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 17:40:43.580096   14730 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 17:40:43.580132   14730 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 17:40:43.580167   14730 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 17:40:43.580757   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:40:43.612221   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 17:40:43.640566   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:40:43.668424   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 17:40:43.695915   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:40:43.722269   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 17:40:43.749021   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:40:43.776793   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 17:40:43.805428   14730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:40:43.834074   14730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:40:43.854781   14730 ssh_runner.go:195] Run: openssl version
	I0229 17:40:43.861472   14730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:40:43.875932   14730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:40:43.881248   14730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:40:43.881302   14730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:40:43.887919   14730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
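The commands above install the minikube CA into the guest's OpenSSL trust store: the certificate is linked under /etc/ssl/certs, its subject hash is computed, and a hash-named symlink (b5213941.0) is created so OpenSSL can locate it. As a standalone sketch using the same paths from the log:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0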
	I0229 17:40:43.902323   14730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:40:43.907247   14730 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 17:40:43.907291   14730 kubeadm.go:404] StartCluster: {Name:addons-848237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-848237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:40:43.907356   14730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 17:40:43.907411   14730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:40:43.957718   14730 cri.go:89] found id: ""
	I0229 17:40:43.957781   14730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:40:43.970888   14730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 17:40:43.983330   14730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:40:43.995798   14730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:40:43.995827   14730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 17:40:44.056955   14730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 17:40:44.057046   14730 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:40:44.203571   14730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:40:44.203725   14730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:40:44.203820   14730 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:40:44.434283   14730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:40:44.647119   14730 out.go:204]   - Generating certificates and keys ...
	I0229 17:40:44.647246   14730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:40:44.647351   14730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:40:44.647451   14730 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 17:40:44.853274   14730 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 17:40:45.119513   14730 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 17:40:45.222366   14730 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 17:40:45.640558   14730 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 17:40:45.640782   14730 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-848237 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0229 17:40:45.826294   14730 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 17:40:45.826478   14730 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-848237 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0229 17:40:46.086932   14730 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 17:40:46.339705   14730 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 17:40:46.531402   14730 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 17:40:46.531514   14730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:40:46.763733   14730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:40:47.022940   14730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:40:47.383703   14730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:40:47.441416   14730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:40:47.442039   14730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:40:47.445775   14730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:40:47.447730   14730 out.go:204]   - Booting up control plane ...
	I0229 17:40:47.447853   14730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:40:47.447970   14730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:40:47.448373   14730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:40:47.469341   14730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:40:47.469480   14730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:40:47.469553   14730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:40:47.595937   14730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:40:53.597446   14730 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002821 seconds
	I0229 17:40:53.597581   14730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 17:40:53.614905   14730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 17:40:54.145993   14730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 17:40:54.146183   14730 kubeadm.go:322] [mark-control-plane] Marking the node addons-848237 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 17:40:54.680422   14730 kubeadm.go:322] [bootstrap-token] Using token: uzz64p.1swubqywwot683h3
	I0229 17:40:54.681852   14730 out.go:204]   - Configuring RBAC rules ...
	I0229 17:40:54.681989   14730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 17:40:54.693464   14730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 17:40:54.705846   14730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 17:40:54.711845   14730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 17:40:54.727686   14730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 17:40:54.732757   14730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 17:40:54.751939   14730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 17:40:54.991152   14730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 17:40:55.110590   14730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 17:40:55.110612   14730 kubeadm.go:322] 
	I0229 17:40:55.110683   14730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 17:40:55.110699   14730 kubeadm.go:322] 
	I0229 17:40:55.110797   14730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 17:40:55.110812   14730 kubeadm.go:322] 
	I0229 17:40:55.110834   14730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 17:40:55.110906   14730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 17:40:55.110991   14730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 17:40:55.111013   14730 kubeadm.go:322] 
	I0229 17:40:55.111097   14730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 17:40:55.111108   14730 kubeadm.go:322] 
	I0229 17:40:55.111175   14730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 17:40:55.111185   14730 kubeadm.go:322] 
	I0229 17:40:55.111259   14730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 17:40:55.111355   14730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 17:40:55.111440   14730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 17:40:55.111450   14730 kubeadm.go:322] 
	I0229 17:40:55.111551   14730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 17:40:55.111672   14730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 17:40:55.111684   14730 kubeadm.go:322] 
	I0229 17:40:55.111807   14730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uzz64p.1swubqywwot683h3 \
	I0229 17:40:55.111938   14730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 17:40:55.111976   14730 kubeadm.go:322] 	--control-plane 
	I0229 17:40:55.111986   14730 kubeadm.go:322] 
	I0229 17:40:55.112099   14730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 17:40:55.112113   14730 kubeadm.go:322] 
	I0229 17:40:55.112230   14730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uzz64p.1swubqywwot683h3 \
	I0229 17:40:55.112372   14730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 17:40:55.112547   14730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:40:55.112567   14730 cni.go:84] Creating CNI manager for ""
	I0229 17:40:55.112577   14730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:40:55.114361   14730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 17:40:55.115667   14730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 17:40:55.175777   14730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 17:40:55.222714   14730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 17:40:55.222819   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:55.222837   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=addons-848237 minikube.k8s.io/updated_at=2024_02_29T17_40_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:55.394252   14730 ops.go:34] apiserver oom_adj: -16
	I0229 17:40:55.436139   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:55.936740   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:56.436388   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:56.936794   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:57.437086   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:57.936682   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:58.436613   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:58.936248   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:59.436561   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:40:59.936206   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:00.437059   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:00.936430   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:01.437088   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:01.936466   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:02.436954   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:02.936379   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:03.437199   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:03.936830   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:04.436792   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:04.936269   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:05.436352   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:05.936210   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:06.436559   14730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:41:06.578870   14730 kubeadm.go:1088] duration metric: took 11.35612152s to wait for elevateKubeSystemPrivileges.
	I0229 17:41:06.578916   14730 kubeadm.go:406] StartCluster complete in 22.671627437s
	I0229 17:41:06.578937   14730 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:06.579076   14730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:41:06.579430   14730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:06.579620   14730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 17:41:06.579691   14730 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0229 17:41:06.579762   14730 addons.go:69] Setting yakd=true in profile "addons-848237"
	I0229 17:41:06.579785   14730 addons.go:234] Setting addon yakd=true in "addons-848237"
	I0229 17:41:06.579787   14730 addons.go:69] Setting ingress-dns=true in profile "addons-848237"
	I0229 17:41:06.579812   14730 addons.go:234] Setting addon ingress-dns=true in "addons-848237"
	I0229 17:41:06.579818   14730 addons.go:69] Setting registry=true in profile "addons-848237"
	I0229 17:41:06.579827   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579831   14730 addons.go:69] Setting cloud-spanner=true in profile "addons-848237"
	I0229 17:41:06.579838   14730 addons.go:234] Setting addon registry=true in "addons-848237"
	I0229 17:41:06.579850   14730 addons.go:234] Setting addon cloud-spanner=true in "addons-848237"
	I0229 17:41:06.579835   14730 addons.go:69] Setting metrics-server=true in profile "addons-848237"
	I0229 17:41:06.579861   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579869   14730 config.go:182] Loaded profile config "addons-848237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 17:41:06.579871   14730 addons.go:234] Setting addon metrics-server=true in "addons-848237"
	I0229 17:41:06.579874   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579893   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579874   14730 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-848237"
	I0229 17:41:06.579910   14730 addons.go:69] Setting helm-tiller=true in profile "addons-848237"
	I0229 17:41:06.579913   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579922   14730 addons.go:234] Setting addon helm-tiller=true in "addons-848237"
	I0229 17:41:06.579933   14730 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-848237"
	I0229 17:41:06.579951   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.579971   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.580285   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580326   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580331   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580337   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580356   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580360   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580365   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580390   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580395   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580415   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580421   14730 addons.go:69] Setting gcp-auth=true in profile "addons-848237"
	I0229 17:41:06.580428   14730 addons.go:69] Setting storage-provisioner=true in profile "addons-848237"
	I0229 17:41:06.580438   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580447   14730 addons.go:69] Setting ingress=true in profile "addons-848237"
	I0229 17:41:06.580447   14730 addons.go:69] Setting volumesnapshots=true in profile "addons-848237"
	I0229 17:41:06.580457   14730 addons.go:234] Setting addon ingress=true in "addons-848237"
	I0229 17:41:06.580460   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580463   14730 addons.go:234] Setting addon volumesnapshots=true in "addons-848237"
	I0229 17:41:06.580415   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580441   14730 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-848237"
	I0229 17:41:06.580505   14730 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-848237"
	I0229 17:41:06.580416   14730 addons.go:69] Setting inspektor-gadget=true in profile "addons-848237"
	I0229 17:41:06.580534   14730 addons.go:234] Setting addon inspektor-gadget=true in "addons-848237"
	I0229 17:41:06.580566   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.580423   14730 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-848237"
	I0229 17:41:06.580608   14730 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-848237"
	I0229 17:41:06.580895   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.580934   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580935   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.580900   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.580992   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.581011   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580440   14730 addons.go:234] Setting addon storage-provisioner=true in "addons-848237"
	I0229 17:41:06.581191   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.581281   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.579816   14730 addons.go:69] Setting default-storageclass=true in profile "addons-848237"
	I0229 17:41:06.581318   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.581323   14730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-848237"
	I0229 17:41:06.581366   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.581386   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580446   14730 mustload.go:65] Loading cluster: addons-848237
	I0229 17:41:06.581551   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.581578   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.581640   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.581658   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.580486   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.582251   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.582599   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.582617   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.601379   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0229 17:41:06.601476   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0229 17:41:06.602373   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0229 17:41:06.602708   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.602904   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.603294   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.603309   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.603351   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.603367   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.603702   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.604144   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.604251   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.604336   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.604787   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.604800   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.604911   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0229 17:41:06.605183   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.605207   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.605231   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.605249   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.605867   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.605916   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.606597   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.606619   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.607365   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.607675   14730 config.go:182] Loaded profile config "addons-848237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 17:41:06.607900   14730 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-848237"
	I0229 17:41:06.607932   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.607999   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.608047   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.608299   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.608316   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.608997   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.609039   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.611508   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36521
	I0229 17:41:06.612004   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.612500   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.612516   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.612843   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.613345   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.613377   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.621344   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44513
	I0229 17:41:06.624265   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.626520   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.626539   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.626569   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0229 17:41:06.627086   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.627222   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.627715   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.627734   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.628067   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.628481   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.628514   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.629138   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0229 17:41:06.629466   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.629534   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.629549   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.629898   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.629926   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.630226   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.631283   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.631322   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.633520   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39775
	I0229 17:41:06.633879   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.634229   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.634242   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.634521   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.634697   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.641717   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0229 17:41:06.642183   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.642376   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0229 17:41:06.642724   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.643101   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.643117   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.643995   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.644579   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.644619   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.644977   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.644992   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.645008   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.645232   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0229 17:41:06.647270   14730 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 17:41:06.645359   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.645552   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.648396   14730 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 17:41:06.648413   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 17:41:06.648431   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.649207   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.649231   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.649952   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0229 17:41:06.649972   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.649995   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.650085   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.650309   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.650833   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.650865   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.651203   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.651220   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.651505   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.651622   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.653270   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45217
	I0229 17:41:06.653292   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0229 17:41:06.653815   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.654288   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.654305   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.654371   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.654668   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.654696   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.654732   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.655110   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.655258   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.655276   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.655295   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.655389   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.655432   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.655631   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.657230   14730 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 17:41:06.655749   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.656294   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.658503   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.660058   14730 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 17:41:06.661182   14730 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 17:41:06.661198   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 17:41:06.661217   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.659492   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.660350   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0229 17:41:06.662125   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.662570   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.662599   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.663290   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.663312   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.663330   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0229 17:41:06.663649   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.664083   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.664833   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0229 17:41:06.664949   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I0229 17:41:06.665237   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.665250   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.665312   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.665326   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0229 17:41:06.665653   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.665714   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.665730   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.665754   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.665793   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0229 17:41:06.666105   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.666156   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.666355   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.666411   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.667364   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.667453   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.667711   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.667729   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.668028   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.668105   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0229 17:41:06.668249   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.670239   14730 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 17:41:06.671427   14730 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 17:41:06.671439   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 17:41:06.671451   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.670240   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.671507   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.669025   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.671560   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.671997   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.669264   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.669483   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.668489   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.671001   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.672037   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.672091   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.673549   14730 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 17:41:06.672334   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.672764   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.673216   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.673493   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.674700   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.674720   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.674704   14730 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 17:41:06.674845   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 17:41:06.674861   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.675051   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.675097   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.675214   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.675559   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.675572   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.675923   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.676128   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.676239   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.676410   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.677363   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.677380   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.677427   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.677612   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.677763   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.677929   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.678560   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.678788   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.678889   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.678940   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.679241   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.679436   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.679670   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.679827   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.680417   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.682099   14730 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 17:41:06.680790   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.682590   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0229 17:41:06.683507   14730 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 17:41:06.683519   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 17:41:06.683536   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.683812   14730 addons.go:234] Setting addon default-storageclass=true in "addons-848237"
	I0229 17:41:06.683852   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:06.684438   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.685752   14730 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 17:41:06.684482   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.684893   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.686926   14730 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 17:41:06.686938   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 17:41:06.686957   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.688096   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.688114   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.688902   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.689488   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.690041   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.690798   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.690819   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.691299   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.691597   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.691666   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.691923   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.691995   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0229 17:41:06.692155   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.692170   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.692321   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.692569   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.692639   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0229 17:41:06.692735   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.692813   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.692894   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.693225   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.693298   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.693306   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.693320   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.694937   14730 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 17:41:06.693696   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.693717   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.693904   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.696259   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.696407   14730 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:41:06.696417   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 17:41:06.696431   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.696907   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.696987   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.697215   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.699794   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.699994   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.701495   14730 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 17:41:06.700711   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.700805   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.700866   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.702869   14730 out.go:177]   - Using image docker.io/busybox:stable
	I0229 17:41:06.704030   14730 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 17:41:06.704048   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 17:41:06.704064   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.702965   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.702977   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0229 17:41:06.703201   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.705303   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 17:41:06.704791   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.705131   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.705692   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0229 17:41:06.706360   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 17:41:06.705737   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I0229 17:41:06.706695   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.707639   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 17:41:06.706744   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.707192   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.707459   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.708283   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.708481   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.710235   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 17:41:06.709186   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.709214   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.709332   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.709433   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0229 17:41:06.709640   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.709862   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.711290   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.712827   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 17:41:06.711402   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.711416   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.711837   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.711871   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.711879   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.711899   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.713101   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0229 17:41:06.713113   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0229 17:41:06.715398   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 17:41:06.714563   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.716536   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 17:41:06.714598   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.714700   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.714713   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.714802   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.714835   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.714573   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.719201   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 17:41:06.717949   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.718122   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.718837   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.719146   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.719816   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.720444   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.720479   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.720492   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 17:41:06.720505   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 17:41:06.720525   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.721034   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.721048   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.721039   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.722137   14730 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 17:41:06.723200   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 17:41:06.723213   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 17:41:06.723224   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.722157   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.721497   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.721606   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:06.723346   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:06.723562   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.721394   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.724042   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.724061   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.724277   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.724479   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.724641   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.725987   14730 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0229 17:41:06.724912   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.725682   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.726833   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.727156   14730 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0229 17:41:06.727169   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0229 17:41:06.727183   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.728717   14730 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:41:06.727597   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.728158   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.729798   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.730161   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.730658   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.731098   14730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:41:06.733077   14730 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:41:06.733088   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 17:41:06.733098   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.732042   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.732051   14730 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 17:41:06.732062   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.731263   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.732175   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.735364   14730 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:41:06.734342   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.734356   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.734530   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.735668   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.736489   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.736502   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.736648   14730 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:41:06.736654   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 17:41:06.736662   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.736364   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.737179   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.737219   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.737409   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.737535   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.737668   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.740034   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.740398   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.740418   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.740542   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.740657   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.740736   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.740807   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.742060   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0229 17:41:06.742362   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:06.742752   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:06.742765   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:06.743051   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:06.743189   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:06.744426   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:06.744591   14730 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 17:41:06.744605   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 17:41:06.744614   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:06.746591   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.747217   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:06.747227   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:06.747239   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:06.747336   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:06.747416   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:06.747516   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:06.939124   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 17:41:06.942247   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 17:41:06.969141   14730 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 17:41:06.969166   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 17:41:06.997456   14730 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 17:41:06.997475   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 17:41:07.010554   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:41:07.037983   14730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 17:41:07.059461   14730 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 17:41:07.059484   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 17:41:07.088525   14730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-848237" context rescaled to 1 replicas
	I0229 17:41:07.088564   14730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 17:41:07.090276   14730 out.go:177] * Verifying Kubernetes components...
	I0229 17:41:07.091422   14730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:41:07.109856   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:41:07.120862   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 17:41:07.191587   14730 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 17:41:07.191606   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 17:41:07.222303   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:41:07.252862   14730 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 17:41:07.252884   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 17:41:07.257085   14730 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 17:41:07.257103   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 17:41:07.266997   14730 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 17:41:07.267016   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 17:41:07.274013   14730 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0229 17:41:07.274034   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0229 17:41:07.279069   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 17:41:07.279086   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 17:41:07.327842   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 17:41:07.468021   14730 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 17:41:07.468043   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 17:41:07.513623   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 17:41:07.578915   14730 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 17:41:07.578947   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 17:41:07.595003   14730 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 17:41:07.595037   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 17:41:07.630167   14730 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 17:41:07.630191   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 17:41:07.740931   14730 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 17:41:07.740956   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0229 17:41:07.759878   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 17:41:07.759905   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 17:41:07.931174   14730 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 17:41:07.931196   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 17:41:07.944823   14730 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 17:41:07.944841   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 17:41:07.965289   14730 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 17:41:07.965305   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 17:41:08.003315   14730 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 17:41:08.003334   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 17:41:08.097914   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 17:41:08.097944   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 17:41:08.129767   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 17:41:08.254205   14730 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 17:41:08.254225   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 17:41:08.285828   14730 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 17:41:08.285851   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 17:41:08.303548   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 17:41:08.332723   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 17:41:08.332743   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 17:41:08.390230   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 17:41:08.390254   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 17:41:08.539320   14730 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 17:41:08.539343   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 17:41:08.583879   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 17:41:08.650088   14730 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:41:08.650114   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 17:41:08.680832   14730 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 17:41:08.680857   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 17:41:08.780938   14730 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 17:41:08.780958   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 17:41:08.933382   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:41:08.996857   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 17:41:08.996881   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 17:41:09.284513   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 17:41:09.446685   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 17:41:09.446708   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 17:41:09.665764   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 17:41:09.665783   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 17:41:09.916552   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 17:41:09.916576   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 17:41:10.151079   14730 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 17:41:10.151100   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 17:41:10.659004   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 17:41:12.051062   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.111903418s)
	I0229 17:41:12.051071   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.10879291s)
	I0229 17:41:12.051118   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.051137   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.051147   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.051154   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.051449   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.051487   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.051495   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.051510   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.051518   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.051621   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.051657   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.051676   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.051693   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.051702   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.051853   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.051866   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.051882   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.051939   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.053177   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.053196   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.195254   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.184674146s)
	I0229 17:41:12.195302   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.195309   14730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.157301327s)
	I0229 17:41:12.195314   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.195323   14730 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 17:41:12.195355   14730 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.103906495s)
	I0229 17:41:12.195731   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.195770   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.195787   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.195796   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:12.195804   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:12.196033   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:12.196049   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:12.196052   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:12.196363   14730 node_ready.go:35] waiting up to 6m0s for node "addons-848237" to be "Ready" ...
	I0229 17:41:12.200296   14730 node_ready.go:49] node "addons-848237" has status "Ready":"True"
	I0229 17:41:12.200319   14730 node_ready.go:38] duration metric: took 3.932458ms waiting for node "addons-848237" to be "Ready" ...
	I0229 17:41:12.200329   14730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 17:41:12.212110   14730 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:13.329992   14730 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 17:41:13.330025   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:13.333825   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:13.334277   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:13.334310   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:13.334459   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:13.334659   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:13.334841   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:13.334985   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:13.888421   14730 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 17:41:13.977760   14730 addons.go:234] Setting addon gcp-auth=true in "addons-848237"
	I0229 17:41:13.977804   14730 host.go:66] Checking if "addons-848237" exists ...
	I0229 17:41:13.978118   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:13.978144   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:13.993655   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0229 17:41:13.994043   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:13.994524   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:13.994552   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:13.994909   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:13.995363   14730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:41:13.995392   14730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:41:14.010711   14730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0229 17:41:14.011181   14730 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:41:14.012196   14730 main.go:141] libmachine: Using API Version  1
	I0229 17:41:14.012211   14730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:41:14.012576   14730 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:41:14.012771   14730 main.go:141] libmachine: (addons-848237) Calling .GetState
	I0229 17:41:14.014554   14730 main.go:141] libmachine: (addons-848237) Calling .DriverName
	I0229 17:41:14.014786   14730 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 17:41:14.014810   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHHostname
	I0229 17:41:14.017277   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:14.017648   14730 main.go:141] libmachine: (addons-848237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:26:7d", ip: ""} in network mk-addons-848237: {Iface:virbr1 ExpiryTime:2024-02-29 18:40:28 +0000 UTC Type:0 Mac:52:54:00:08:26:7d Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-848237 Clientid:01:52:54:00:08:26:7d}
	I0229 17:41:14.017679   14730 main.go:141] libmachine: (addons-848237) DBG | domain addons-848237 has defined IP address 192.168.39.195 and MAC address 52:54:00:08:26:7d in network mk-addons-848237
	I0229 17:41:14.017844   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHPort
	I0229 17:41:14.018029   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHKeyPath
	I0229 17:41:14.018205   14730 main.go:141] libmachine: (addons-848237) Calling .GetSSHUsername
	I0229 17:41:14.018354   14730 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/addons-848237/id_rsa Username:docker}
	I0229 17:41:14.335256   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:16.195415   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.085520871s)
	I0229 17:41:16.195461   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195472   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195472   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.074581483s)
	I0229 17:41:16.195509   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195528   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195570   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.973244949s)
	I0229 17:41:16.195589   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195598   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195700   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.867833072s)
	I0229 17:41:16.195729   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195732   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.195739   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195777   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.682124326s)
	I0229 17:41:16.195792   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195803   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195819   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.195828   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.195837   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195844   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195869   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.195877   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.06607762s)
	I0229 17:41:16.195879   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.195890   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195893   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.195897   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195902   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.195982   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.196005   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.196007   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.892433875s)
	I0229 17:41:16.196016   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.196025   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.196025   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.196033   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.196035   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.196043   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.196051   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.196059   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.196059   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.196069   14730 addons.go:470] Verifying addon ingress=true in "addons-848237"
	I0229 17:41:16.197845   14730 out.go:177] * Verifying ingress addon...
	I0229 17:41:16.197839   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.196152   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.612247619s)
	I0229 17:41:16.196241   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.196243   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.262835342s)
	I0229 17:41:16.196052   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.196269   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.196300   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.911757286s)
	I0229 17:41:16.196740   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.196769   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.197716   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.197750   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.197825   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.199105   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	W0229 17:41:16.199121   14730 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 17:41:16.199141   14730 retry.go:31] will retry after 301.253777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 17:41:16.199164   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199172   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199223   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199253   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.199261   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199268   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199268   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.199278   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199285   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199302   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199310   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199316   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.199324   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199330   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199413   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.199442   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.199446   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.199452   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.199454   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.199460   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.199466   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.199753   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.199772   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.199795   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.199802   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.201585   14730 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-848237 service yakd-dashboard -n yakd-dashboard
	
	I0229 17:41:16.200484   14730 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 17:41:16.200515   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.200537   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.200555   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.200569   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.200582   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.200601   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.200614   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.202389   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.202409   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.202902   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.202971   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.202982   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.202989   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.202996   14730 addons.go:470] Verifying addon registry=true in "addons-848237"
	I0229 17:41:16.202998   14730 addons.go:470] Verifying addon metrics-server=true in "addons-848237"
	I0229 17:41:16.203032   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.203045   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.203053   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.204598   14730 out.go:177] * Verifying registry addon...
	I0229 17:41:16.203347   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:16.203378   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.205851   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.206619   14730 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 17:41:16.220736   14730 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 17:41:16.220754   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:16.248607   14730 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 17:41:16.248628   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:16.295991   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.296017   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.296265   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.296280   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.296288   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	W0229 17:41:16.296400   14730 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0229 17:41:16.310869   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:16.310890   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:16.311207   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:16.311228   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:16.500561   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:41:16.708148   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:16.714056   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:16.720119   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:17.342403   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:17.342675   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:17.712692   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:17.727620   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:18.215681   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:18.236134   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:18.784159   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:18.797046   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:18.846974   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:19.206122   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.547060244s)
	I0229 17:41:19.206168   14730 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.191364773s)
	I0229 17:41:19.206172   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:19.206185   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:19.207782   14730 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:41:19.206442   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:19.206445   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:19.210169   14730 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 17:41:19.209012   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:19.211338   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:19.211350   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:19.211380   14730 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 17:41:19.211398   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 17:41:19.211671   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:19.211675   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:19.211691   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:19.211711   14730 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-848237"
	I0229 17:41:19.213114   14730 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 17:41:19.215259   14730 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 17:41:19.257615   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:19.257720   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:19.275966   14730 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 17:41:19.275991   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:19.324275   14730 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 17:41:19.324297   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0229 17:41:19.389821   14730 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 17:41:19.389839   14730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 17:41:19.454245   14730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 17:41:19.708342   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:19.711186   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:19.722996   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:20.058150   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.557538869s)
	I0229 17:41:20.058206   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:20.058223   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:20.058470   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:20.058532   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:20.058556   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:20.058617   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:20.058641   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:20.058875   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:20.058913   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:20.058926   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:20.208391   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:20.222884   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:20.228151   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:20.708243   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:20.711897   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:20.724216   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:21.021060   14730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.566782347s)
	I0229 17:41:21.021112   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:21.021125   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:21.021502   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:21.021520   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:21.021535   14730 main.go:141] libmachine: Making call to close driver server
	I0229 17:41:21.021538   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:21.021543   14730 main.go:141] libmachine: (addons-848237) Calling .Close
	I0229 17:41:21.021759   14730 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:41:21.021779   14730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:41:21.021788   14730 main.go:141] libmachine: (addons-848237) DBG | Closing plugin on server side
	I0229 17:41:21.022572   14730 addons.go:470] Verifying addon gcp-auth=true in "addons-848237"
	I0229 17:41:21.025441   14730 out.go:177] * Verifying gcp-auth addon...
	I0229 17:41:21.028303   14730 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 17:41:21.037096   14730 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 17:41:21.037115   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:21.207651   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:21.213910   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:21.225097   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:21.227553   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:21.532295   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:21.708277   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:21.711966   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:21.722220   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:22.032594   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:22.207906   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:22.227101   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:22.229832   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:22.531939   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:22.708901   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:22.711141   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:22.720426   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:23.032719   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:23.208259   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:23.214435   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:23.223058   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:23.532324   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:23.710496   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:23.724238   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:23.741115   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:23.757306   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:24.127979   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:24.209566   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:24.212062   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:24.226150   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:24.540849   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:24.710763   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:24.712638   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:24.720563   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:25.032983   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:25.208718   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:25.212026   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:25.227071   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:25.534189   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:25.708101   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:25.711149   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:25.721859   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:26.032772   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:26.213636   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:26.216408   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:26.221709   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:26.231636   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:26.533632   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:26.708145   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:26.711420   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:26.724814   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:27.034803   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:27.208424   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:27.211185   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:27.222885   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:27.533471   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:27.707935   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:27.711848   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:27.720312   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:28.032744   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:28.209472   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:28.216528   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:28.220824   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:28.222889   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:28.533979   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:28.709759   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:28.730970   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:28.732140   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:29.032090   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:29.208004   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:29.211034   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:29.220849   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:29.533210   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:29.708608   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:29.724240   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:29.730698   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:30.032182   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:30.208446   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:30.214428   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:30.240498   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:30.241118   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:30.800661   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:30.800894   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:30.811147   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:30.811699   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:31.033563   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:31.208528   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:31.212152   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:31.222089   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:31.533385   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:31.709637   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:31.712237   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:31.724686   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:32.033192   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:32.207524   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:32.212580   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:32.220408   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:32.532358   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:32.707840   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:32.712156   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:32.718779   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:32.724458   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:33.033088   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:33.207747   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:33.210942   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:33.242275   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:33.532118   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:33.708770   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:33.712384   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:33.724246   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:34.032302   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:34.210003   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:34.212459   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:34.225600   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:34.534013   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:34.708217   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:34.711598   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:34.721671   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:34.722832   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:35.032896   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:35.208764   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:35.211770   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:35.220468   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:35.532426   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:35.707876   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:35.710901   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:35.720673   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:36.033372   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:36.208802   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:36.215227   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:36.223545   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:36.533882   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:36.710122   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:36.713178   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:36.723574   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:36.725311   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:37.033208   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:37.213457   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:37.214222   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:37.232745   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:37.533151   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:37.709793   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:37.712097   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:37.723822   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:38.033953   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:38.208290   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:38.212376   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:38.224119   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:38.713072   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:38.714662   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:38.714901   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:38.722520   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:39.032720   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:39.207912   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:39.211701   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:39.223394   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:39.226469   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:39.532253   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:39.707688   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:39.711431   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:39.730381   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:40.033276   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:40.208282   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:40.212355   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:40.229789   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:40.532009   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:40.709180   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:40.712523   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:40.719998   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:41.032105   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:41.207637   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:41.211214   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:41.222941   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:41.532222   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:41.708750   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:41.711643   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:41.717387   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:41.720135   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:42.033029   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:42.207513   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:42.211143   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:42.220745   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:42.533608   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:42.708326   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:42.712404   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:42.721029   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:43.032567   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:43.208543   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:43.212858   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:43.224302   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:43.532193   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:43.707963   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:43.720790   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:43.724040   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:43.724966   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:44.033371   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:44.209003   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:44.213207   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:44.220696   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:44.532278   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:44.708472   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:44.711415   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:44.723063   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:45.032736   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:45.208189   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:45.211147   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:45.220937   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:45.532746   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:45.708398   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:45.713479   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:45.725265   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:46.034078   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:46.209724   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:46.214984   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:46.228092   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:46.228648   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:46.532577   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:46.708275   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:46.711051   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:46.723532   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:47.032310   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:47.208017   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:47.212830   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:47.220027   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:47.532262   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:47.713428   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:47.714486   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:47.727825   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:48.032721   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:48.495538   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:48.498896   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:48.508878   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:48.522377   14730 pod_ready.go:102] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"False"
	I0229 17:41:48.532361   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:48.707684   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:48.710700   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:48.722401   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:49.033055   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:49.207586   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:49.212003   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:49.220566   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:49.557293   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:49.709023   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:49.714795   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:49.726075   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:49.729966   14730 pod_ready.go:92] pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:49.729989   14730 pod_ready.go:81] duration metric: took 37.517851329s waiting for pod "coredns-5dd5756b68-chwkn" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.730001   14730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.738825   14730 pod_ready.go:92] pod "etcd-addons-848237" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:49.738844   14730 pod_ready.go:81] duration metric: took 8.835286ms waiting for pod "etcd-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.738856   14730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.743780   14730 pod_ready.go:92] pod "kube-apiserver-addons-848237" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:49.743797   14730 pod_ready.go:81] duration metric: took 4.934483ms waiting for pod "kube-apiserver-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.743817   14730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.752621   14730 pod_ready.go:92] pod "kube-controller-manager-addons-848237" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:49.752640   14730 pod_ready.go:81] duration metric: took 8.816464ms waiting for pod "kube-controller-manager-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.752656   14730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hjjrx" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.758861   14730 pod_ready.go:92] pod "kube-proxy-hjjrx" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:49.758878   14730 pod_ready.go:81] duration metric: took 6.214926ms waiting for pod "kube-proxy-hjjrx" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:49.758890   14730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:50.032374   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:50.118258   14730 pod_ready.go:92] pod "kube-scheduler-addons-848237" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:50.118280   14730 pod_ready.go:81] duration metric: took 359.382997ms waiting for pod "kube-scheduler-addons-848237" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:50.118289   14730 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zd2r4" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:50.207642   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:50.210753   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:41:50.220489   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:50.518696   14730 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zd2r4" in "kube-system" namespace has status "Ready":"True"
	I0229 17:41:50.518719   14730 pod_ready.go:81] duration metric: took 400.423896ms waiting for pod "nvidia-device-plugin-daemonset-zd2r4" in "kube-system" namespace to be "Ready" ...
	I0229 17:41:50.518726   14730 pod_ready.go:38] duration metric: took 38.318385342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 17:41:50.518740   14730 api_server.go:52] waiting for apiserver process to appear ...
	I0229 17:41:50.518786   14730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 17:41:50.537944   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:50.544579   14730 api_server.go:72] duration metric: took 43.455986853s to wait for apiserver process to appear ...
	I0229 17:41:50.544603   14730 api_server.go:88] waiting for apiserver healthz status ...
	I0229 17:41:50.544622   14730 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0229 17:41:50.551062   14730 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0229 17:41:50.552362   14730 api_server.go:141] control plane version: v1.28.4
	I0229 17:41:50.552382   14730 api_server.go:131] duration metric: took 7.773064ms to wait for apiserver health ...
	I0229 17:41:50.552390   14730 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 17:41:50.708645   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:50.711761   14730 kapi.go:107] duration metric: took 34.505140434s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 17:41:50.724693   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:50.724883   14730 system_pods.go:59] 18 kube-system pods found
	I0229 17:41:50.724907   14730 system_pods.go:61] "coredns-5dd5756b68-chwkn" [a75092ae-0227-4c0b-ae9d-5f885b87f382] Running
	I0229 17:41:50.724915   14730 system_pods.go:61] "csi-hostpath-attacher-0" [61d6e1e2-93ee-4dd5-ad62-f7a8d2b2f4ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 17:41:50.724924   14730 system_pods.go:61] "csi-hostpath-resizer-0" [483406c3-b4c2-4982-ba8c-439a5c2a740c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 17:41:50.724932   14730 system_pods.go:61] "csi-hostpathplugin-xlhrd" [e0c9c1c1-da82-4683-9815-109b818d8551] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 17:41:50.724939   14730 system_pods.go:61] "etcd-addons-848237" [0ae7caa6-b6e9-44d8-98b3-607a48df234f] Running
	I0229 17:41:50.724943   14730 system_pods.go:61] "kube-apiserver-addons-848237" [4832f9f4-86e6-4cd3-a2b7-4e470495ec5c] Running
	I0229 17:41:50.724946   14730 system_pods.go:61] "kube-controller-manager-addons-848237" [d684b82a-b4a5-4dd6-b436-16eec09b5639] Running
	I0229 17:41:50.724952   14730 system_pods.go:61] "kube-ingress-dns-minikube" [c179431b-fde2-47e6-b813-21fc946a70af] Running
	I0229 17:41:50.724955   14730 system_pods.go:61] "kube-proxy-hjjrx" [41fca01a-d52e-43f4-a5ba-57b73e13d971] Running
	I0229 17:41:50.724957   14730 system_pods.go:61] "kube-scheduler-addons-848237" [0529991d-349c-4d60-83ca-1059f7534901] Running
	I0229 17:41:50.724962   14730 system_pods.go:61] "metrics-server-69cf46c98-rhml2" [b0f01afc-d498-421c-8612-a6deac805806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 17:41:50.724968   14730 system_pods.go:61] "nvidia-device-plugin-daemonset-zd2r4" [a3ce85f6-cadd-4e86-b3db-77445eb8f021] Running
	I0229 17:41:50.724971   14730 system_pods.go:61] "registry-proxy-676t7" [9371b307-e44d-4f2a-ba6a-e6c43f46f6e3] Running
	I0229 17:41:50.724974   14730 system_pods.go:61] "registry-ztkrj" [c7f086f8-8e7c-4a01-88e8-e51d7edef88b] Running
	I0229 17:41:50.724981   14730 system_pods.go:61] "snapshot-controller-58dbcc7b99-77vp2" [4a4cdfb3-12ee-4bfd-8dd2-36a3f2e5be23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:41:50.724987   14730 system_pods.go:61] "snapshot-controller-58dbcc7b99-bpkgs" [4fe3ae32-b9cc-4fbd-90b4-a4af9d44b8ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:41:50.724993   14730 system_pods.go:61] "storage-provisioner" [1d7c049f-ea7d-4015-b329-8a2d4bff29d7] Running
	I0229 17:41:50.724997   14730 system_pods.go:61] "tiller-deploy-7b677967b9-w4gtk" [fd013549-9d6e-4dae-8e37-ecc25403919b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 17:41:50.725003   14730 system_pods.go:74] duration metric: took 172.608524ms to wait for pod list to return data ...
	I0229 17:41:50.725013   14730 default_sa.go:34] waiting for default service account to be created ...
	I0229 17:41:50.917032   14730 default_sa.go:45] found service account: "default"
	I0229 17:41:50.917055   14730 default_sa.go:55] duration metric: took 192.035707ms for default service account to be created ...
	I0229 17:41:50.917063   14730 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 17:41:51.033855   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:51.125337   14730 system_pods.go:86] 18 kube-system pods found
	I0229 17:41:51.125366   14730 system_pods.go:89] "coredns-5dd5756b68-chwkn" [a75092ae-0227-4c0b-ae9d-5f885b87f382] Running
	I0229 17:41:51.125374   14730 system_pods.go:89] "csi-hostpath-attacher-0" [61d6e1e2-93ee-4dd5-ad62-f7a8d2b2f4ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 17:41:51.125380   14730 system_pods.go:89] "csi-hostpath-resizer-0" [483406c3-b4c2-4982-ba8c-439a5c2a740c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 17:41:51.125388   14730 system_pods.go:89] "csi-hostpathplugin-xlhrd" [e0c9c1c1-da82-4683-9815-109b818d8551] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 17:41:51.125393   14730 system_pods.go:89] "etcd-addons-848237" [0ae7caa6-b6e9-44d8-98b3-607a48df234f] Running
	I0229 17:41:51.125398   14730 system_pods.go:89] "kube-apiserver-addons-848237" [4832f9f4-86e6-4cd3-a2b7-4e470495ec5c] Running
	I0229 17:41:51.125402   14730 system_pods.go:89] "kube-controller-manager-addons-848237" [d684b82a-b4a5-4dd6-b436-16eec09b5639] Running
	I0229 17:41:51.125406   14730 system_pods.go:89] "kube-ingress-dns-minikube" [c179431b-fde2-47e6-b813-21fc946a70af] Running
	I0229 17:41:51.125410   14730 system_pods.go:89] "kube-proxy-hjjrx" [41fca01a-d52e-43f4-a5ba-57b73e13d971] Running
	I0229 17:41:51.125414   14730 system_pods.go:89] "kube-scheduler-addons-848237" [0529991d-349c-4d60-83ca-1059f7534901] Running
	I0229 17:41:51.125420   14730 system_pods.go:89] "metrics-server-69cf46c98-rhml2" [b0f01afc-d498-421c-8612-a6deac805806] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 17:41:51.125424   14730 system_pods.go:89] "nvidia-device-plugin-daemonset-zd2r4" [a3ce85f6-cadd-4e86-b3db-77445eb8f021] Running
	I0229 17:41:51.125428   14730 system_pods.go:89] "registry-proxy-676t7" [9371b307-e44d-4f2a-ba6a-e6c43f46f6e3] Running
	I0229 17:41:51.125432   14730 system_pods.go:89] "registry-ztkrj" [c7f086f8-8e7c-4a01-88e8-e51d7edef88b] Running
	I0229 17:41:51.125443   14730 system_pods.go:89] "snapshot-controller-58dbcc7b99-77vp2" [4a4cdfb3-12ee-4bfd-8dd2-36a3f2e5be23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:41:51.125451   14730 system_pods.go:89] "snapshot-controller-58dbcc7b99-bpkgs" [4fe3ae32-b9cc-4fbd-90b4-a4af9d44b8ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:41:51.125456   14730 system_pods.go:89] "storage-provisioner" [1d7c049f-ea7d-4015-b329-8a2d4bff29d7] Running
	I0229 17:41:51.125464   14730 system_pods.go:89] "tiller-deploy-7b677967b9-w4gtk" [fd013549-9d6e-4dae-8e37-ecc25403919b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 17:41:51.125470   14730 system_pods.go:126] duration metric: took 208.401906ms to wait for k8s-apps to be running ...
	I0229 17:41:51.125480   14730 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 17:41:51.125522   14730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:41:51.172059   14730 system_svc.go:56] duration metric: took 46.568991ms WaitForService to wait for kubelet.
	I0229 17:41:51.172086   14730 kubeadm.go:581] duration metric: took 44.083499469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 17:41:51.172102   14730 node_conditions.go:102] verifying NodePressure condition ...
	I0229 17:41:51.208136   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:51.221931   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:51.316689   14730 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 17:41:51.316714   14730 node_conditions.go:123] node cpu capacity is 2
	I0229 17:41:51.316726   14730 node_conditions.go:105] duration metric: took 144.619257ms to run NodePressure ...
	I0229 17:41:51.316736   14730 start.go:228] waiting for startup goroutines ...
	I0229 17:41:51.531623   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:51.708277   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:51.721412   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:52.032305   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:52.207840   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:52.221091   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:52.532365   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:52.744387   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:52.745149   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:53.032906   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:53.209511   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:53.221039   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:53.533471   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:53.708747   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:53.721391   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:54.033143   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:54.208101   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:54.220080   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:54.531829   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:54.708484   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:54.720646   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:55.032914   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:55.209006   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:55.222214   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:55.533094   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:55.707876   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:55.720054   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:56.034306   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:56.206943   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:56.220633   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:56.538293   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:56.708438   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:56.720731   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:57.033074   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:57.208463   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:57.220437   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:57.532492   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:57.709917   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:57.722850   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:58.034832   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:58.208720   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:58.221264   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:58.532917   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:58.708218   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:58.719729   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:59.032992   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:59.208478   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:59.220716   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:41:59.532770   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:41:59.709607   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:41:59.721434   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:00.033144   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:00.208620   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:00.220806   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:00.532426   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:00.708257   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:00.720589   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:01.032789   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:01.208581   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:01.221699   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:01.541222   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:01.707430   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:01.728272   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:02.032771   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:02.208350   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:02.221937   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:02.532004   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:02.709136   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:02.724944   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:03.031999   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:03.208404   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:03.220759   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:03.533396   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:03.707783   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:03.721003   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:04.207905   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:04.209721   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:04.220891   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:04.534941   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:04.708579   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:04.722156   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:05.032316   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:05.208310   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:05.221072   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:05.532083   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:05.707472   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:05.722059   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:06.032534   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:06.207813   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:06.223478   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:06.532509   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:06.708661   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:06.722465   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:07.032639   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:07.208373   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:07.221067   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:07.533128   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:07.708252   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:07.726386   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:08.032458   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:08.207936   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:08.219847   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:08.532745   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:08.708094   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:08.720874   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:09.032583   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:09.209444   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:09.221081   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:09.532502   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:09.709540   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:09.721203   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:10.032315   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:10.208298   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:10.221013   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:10.532159   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:10.707662   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:10.721465   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:11.032891   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:11.208801   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:11.221017   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:11.532845   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:11.707970   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:11.721427   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:12.032516   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:12.210122   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:12.223012   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:12.532443   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:12.826628   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:12.831759   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:13.033024   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:13.209082   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:13.221157   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:13.532602   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:13.708261   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:13.720557   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:14.032589   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:14.208136   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:14.220201   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:14.531699   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:14.708256   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:14.720599   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:15.261642   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:15.262262   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:15.262564   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:15.533032   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:15.709219   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:15.720674   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:16.041534   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:16.207648   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:16.220904   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:16.531961   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:16.707918   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:16.724948   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:17.034584   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:17.211170   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:17.232718   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:17.532801   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:17.708393   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:17.725289   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:18.042517   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:18.208301   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:18.222207   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:18.532015   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:18.707340   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:18.721064   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:19.032767   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:19.219477   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:19.230261   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:19.592681   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:19.711426   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:19.728248   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:20.031964   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:20.208891   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:20.221901   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:20.533931   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:20.708339   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:20.720768   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:21.033227   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:21.208339   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:21.221839   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:21.534220   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:21.711178   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:21.729065   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:22.032030   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:22.209406   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:22.222162   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:22.534564   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:22.713412   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:22.724900   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:23.032005   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:23.211049   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:23.233174   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:23.554356   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:23.708725   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:23.721337   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:24.049633   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:24.207786   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:24.222677   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:24.532813   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:24.708155   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:24.721248   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:25.032959   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:25.208139   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:25.220360   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:25.532070   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:25.707361   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:25.720911   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:26.031925   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:26.208616   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:26.220485   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:26.532491   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:26.710150   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:26.720589   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:27.350280   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:27.352353   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:27.355001   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:27.532713   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:27.707670   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:27.721445   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:28.032432   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:28.207647   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:28.221922   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:28.532984   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:28.708591   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:28.720682   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:29.033227   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:29.207805   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:29.221210   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:29.533105   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:29.708643   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:29.720668   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:30.032095   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:30.207674   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:30.220823   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:30.533226   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:30.708073   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:30.721174   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:31.032135   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:31.208654   14730 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:31.222125   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:31.535943   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:31.707487   14730 kapi.go:107] duration metric: took 1m15.507001877s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 17:42:31.720806   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:32.032655   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:32.221709   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:32.533814   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:32.724967   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:33.032252   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:33.224787   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:33.533017   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:33.721149   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:34.032816   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:34.221940   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:34.532511   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:34.723728   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:35.032734   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:35.221490   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:35.532497   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:35.721704   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:36.032350   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:36.224464   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:36.534141   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:36.721693   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:37.032962   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:37.221363   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:37.536271   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:37.721593   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:38.036363   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:38.222169   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:38.547529   14730 kapi.go:107] duration metric: took 1m17.519221845s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 17:42:38.549394   14730 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-848237 cluster.
	I0229 17:42:38.550821   14730 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 17:42:38.552241   14730 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
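	(Editor's note: the three gcp-auth messages above describe the addon's opt-out and refresh behavior. A minimal sketch of how a user would act on them, assuming standard minikube/kubectl usage; the pod name "test-pod" and the label value "true" are illustrative, only the label key and the --refresh flag come from the messages themselves:

	  # Create a pod that opts out of GCP credential mounting via the gcp-auth-skip-secret label
	  # (value "true" is an assumption; the message above only names the label key)
	  kubectl --context addons-848237 run test-pod --image=nginx --labels="gcp-auth-skip-secret=true"

	  # Re-mount credentials into pods that already exist by re-enabling the addon with --refresh
	  out/minikube-linux-amd64 -p addons-848237 addons enable gcp-auth --refresh
	)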
	I0229 17:42:38.721069   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:39.222115   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:39.721157   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:40.583713   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:40.722304   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:41.222511   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:41.722279   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:42.227855   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:42.761781   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:43.226659   14730 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:43.720715   14730 kapi.go:107] duration metric: took 1m24.505454905s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 17:42:43.722384   14730 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, helm-tiller, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0229 17:42:43.723585   14730 addons.go:505] enable addons completed in 1m37.143893875s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner helm-tiller yakd metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0229 17:42:43.723617   14730 start.go:233] waiting for cluster config update ...
	I0229 17:42:43.723642   14730 start.go:242] writing updated cluster config ...
	I0229 17:42:43.723921   14730 ssh_runner.go:195] Run: rm -f paused
	I0229 17:42:43.774641   14730 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 17:42:43.776304   14730 out.go:177] * Done! kubectl is now configured to use "addons-848237" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.587741510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709228752587714484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aefc8507-2216-4a73-ba4e-714b03f9abb6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.588688669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a2367f5-6e56-4b98-8289-e34bebe3c619 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.588755896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a2367f5-6e56-4b98-8289-e34bebe3c619 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.589056936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d84de4d35b7862b48dec29632f482f6f963e3ba2a0e7f9e7d6faecc1b877ff2e,PodSandboxId:cb2a3c9f50453187627fc4b922308a19ec319b47cbdbae4a59942f04ba10b1de,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709228745231519830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5dnt4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0844bd3-8bb9-4f6f-a58a-5de22232d314,},Annotations:map[string]string{io.kubernetes.container.hash: 431e904e,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef9fbc5a0f84a7f9c9893e4997791242648368398374538f69e0c288d2d8737,PodSandboxId:4f24a8bb2deb27728d547cbc4e2920b0a7fa0ad6301a66d872d7c5ea3013dbad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709228602935402283,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e49acc6-b997-4f27-b129-34cfa10cb8cb,},Annotations:map[string]string{io.kubern
etes.container.hash: 57f1d1ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93c4efa75f8f84eea8d74feff6201eb9d9fe55c26880e7719c9878650310883,PodSandboxId:e96407ce452468d3ad12afc0bc79873cfa63eb42547d3ddb2a0ee1c2d8627a2f,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709228585224702778,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-7tv5l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 51e32711-5762-4a9b-934a-dcb5b85938af,},Annotations:map[string]string{io.kubernetes.container.hash: 2fef238d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea58744ea337ede85bd99797e7b1ca59e5355a7226783e82d0f88b6e45a25f46,PodSandboxId:cba18d49ebc19d4cef21ff4dc19a67562876eb807e9719ccce22524675834a61,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709228557882950853,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-2b9zg,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2f2830c7-159f-4d7e-85d7-d3177e96848f,},Annotations:map[string]string{io.kubernetes.container.hash: ad76735d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b6acdb7806efb68df153cbfad720ef7ccdc66578ee586cf7d30c6b5c7f52e,PodSandboxId:8b240772f752896a5707f3d4fe94084304b1332d2214de5cbfc37a85c9cb53c4,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709228536885550796,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ddxvc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 917f5fea-b169-4fae-a1df-86b5a122e29f,},Annotations:map[string]string{io.kubernetes.container.hash: b763e23a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf078d06d86aae573b06fb831c432d51f42b1a6c6fd53cee58aa82e5b8a5283e,PodSandboxId:65e322853dcfbae4720fbea048a65f7359164aa38fbe2e0f8a0f74c51ca9c2c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:
1709228536027556443,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vl2zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbdf6b87-ad24-4ae8-8a78-c359cbe80f65,},Annotations:map[string]string{io.kubernetes.container.hash: 43b54ad6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad34b0c9f481b9ec48c521bf8c421880b1d7106b02f6df57cab09c9454a3becb,PodSandboxId:d7d479ba6e5c6a00153ef29cb932e4a1df4e2fb064220e36bcdde61fc1c615bf,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1709228524387629308,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-s7mv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 7b17e5c9-b2c3-48df-bac5-526e28913fda,},Annotations:map[string]string{io.kubernetes.container.hash: 3c030965,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349cfe6c090e3e4a66f0ee90cd0498a7acd8394aac42e3cad8e74fb650348152,PodSandboxId:16c22a1c340403740ca380d07a0efa55f8d4d51ec34766e212f24af74bdcdf7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709228476655375218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7c049f-ea7d-4015-b329-8a2d4bff29d7,},Annotations:map[string]string{io.kubernetes.container.hash: ba17c5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2f692bf619364a5ea1911921413b0e246cce66f2f56bf31d02f478eedc26fa,PodSandboxId:b2ce76c5d4493b17226759d01be67f42c381fb512e39c8411439013a798c4030,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6
e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709228470988416483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-chwkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75092ae-0227-4c0b-ae9d-5f885b87f382,},Annotations:map[string]string{io.kubernetes.container.hash: 71e02798,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a542069dc74f377347dc96320409d53427bb9f0696b72a44f3ff7d786b89677,PodSandboxId:6ea041a565f0ded31562cd8cb50a004fd7322ce5fd10c8a43b2264efcc3e8128,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709228469833719685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjjrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41fca01a-d52e-43f4-a5ba-57b73e13d971,},Annotations:map[string]string{io.kubernetes.container.hash: dbd63a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ba6519e24c5cb2bf71081b812a9d1f406fa95b244df463710d663da7efa612,PodSandboxId:932b0eec754d26f67c6e7e88929ec31dfde478ccfd7c77c745d9ec72daa44d36,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f70253
2592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709228448834272646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde510852a7323eefc6e6327b658ca56,},Annotations:map[string]string{io.kubernetes.container.hash: a3b4d800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877ba6c66aace1da89f7a68f399f8914fc312fc76e1697b408ceb02a57ecc48a,PodSandboxId:b63d4068ee1a0c83507444b94abd778b93ae7af1278b41f4cbf995b9e7750da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8e
a7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709228448832998567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168d8601412944728b39c9823374d2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e24d6c0095bac9ae8fc75b1ffaa6119a8bef7fd51e9e38a78d6ba801d94792d,PodSandboxId:72635eed945e10e92a9af004ab046d5ea3a17499eae9517f44a3810bc31eafeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709228448746746414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adccbb7d8bb568bee3b62d3329a764b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2f10c6422235ac791355cde3c3e91197ea588c761ac64b4d60a1e71fd2734,PodSandboxId:ebe2aebc2f65ad68c20d04a91e252efe899642b48fa3cb38482cf467965f587a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db63
50c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709228448761616628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0703dc73308092a04d0cb7198d47a774,},Annotations:map[string]string{io.kubernetes.container.hash: e88fb786,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a2367f5-6e56-4b98-8289-e34bebe3c619 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.631029382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e181040d-70ec-42e7-896f-a37887feea2e name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.631098115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e181040d-70ec-42e7-896f-a37887feea2e name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.633292539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f861b70-9756-44a1-bde8-5d00d19c08ea name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.635160508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709228752635083235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f861b70-9756-44a1-bde8-5d00d19c08ea name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.635759758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b9f98d7-eae9-4642-a2d1-55f717fa67b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.635813537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b9f98d7-eae9-4642-a2d1-55f717fa67b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.636423190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d84de4d35b7862b48dec29632f482f6f963e3ba2a0e7f9e7d6faecc1b877ff2e,PodSandboxId:cb2a3c9f50453187627fc4b922308a19ec319b47cbdbae4a59942f04ba10b1de,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709228745231519830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5dnt4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0844bd3-8bb9-4f6f-a58a-5de22232d314,},Annotations:map[string]string{io.kubernetes.container.hash: 431e904e,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef9fbc5a0f84a7f9c9893e4997791242648368398374538f69e0c288d2d8737,PodSandboxId:4f24a8bb2deb27728d547cbc4e2920b0a7fa0ad6301a66d872d7c5ea3013dbad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709228602935402283,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e49acc6-b997-4f27-b129-34cfa10cb8cb,},Annotations:map[string]string{io.kubern
etes.container.hash: 57f1d1ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93c4efa75f8f84eea8d74feff6201eb9d9fe55c26880e7719c9878650310883,PodSandboxId:e96407ce452468d3ad12afc0bc79873cfa63eb42547d3ddb2a0ee1c2d8627a2f,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709228585224702778,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-7tv5l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 51e32711-5762-4a9b-934a-dcb5b85938af,},Annotations:map[string]string{io.kubernetes.container.hash: 2fef238d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea58744ea337ede85bd99797e7b1ca59e5355a7226783e82d0f88b6e45a25f46,PodSandboxId:cba18d49ebc19d4cef21ff4dc19a67562876eb807e9719ccce22524675834a61,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709228557882950853,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-2b9zg,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2f2830c7-159f-4d7e-85d7-d3177e96848f,},Annotations:map[string]string{io.kubernetes.container.hash: ad76735d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b6acdb7806efb68df153cbfad720ef7ccdc66578ee586cf7d30c6b5c7f52e,PodSandboxId:8b240772f752896a5707f3d4fe94084304b1332d2214de5cbfc37a85c9cb53c4,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709228536885550796,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ddxvc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 917f5fea-b169-4fae-a1df-86b5a122e29f,},Annotations:map[string]string{io.kubernetes.container.hash: b763e23a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf078d06d86aae573b06fb831c432d51f42b1a6c6fd53cee58aa82e5b8a5283e,PodSandboxId:65e322853dcfbae4720fbea048a65f7359164aa38fbe2e0f8a0f74c51ca9c2c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:
1709228536027556443,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vl2zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbdf6b87-ad24-4ae8-8a78-c359cbe80f65,},Annotations:map[string]string{io.kubernetes.container.hash: 43b54ad6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad34b0c9f481b9ec48c521bf8c421880b1d7106b02f6df57cab09c9454a3becb,PodSandboxId:d7d479ba6e5c6a00153ef29cb932e4a1df4e2fb064220e36bcdde61fc1c615bf,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1709228524387629308,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-s7mv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 7b17e5c9-b2c3-48df-bac5-526e28913fda,},Annotations:map[string]string{io.kubernetes.container.hash: 3c030965,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349cfe6c090e3e4a66f0ee90cd0498a7acd8394aac42e3cad8e74fb650348152,PodSandboxId:16c22a1c340403740ca380d07a0efa55f8d4d51ec34766e212f24af74bdcdf7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709228476655375218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7c049f-ea7d-4015-b329-8a2d4bff29d7,},Annotations:map[string]string{io.kubernetes.container.hash: ba17c5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2f692bf619364a5ea1911921413b0e246cce66f2f56bf31d02f478eedc26fa,PodSandboxId:b2ce76c5d4493b17226759d01be67f42c381fb512e39c8411439013a798c4030,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6
e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709228470988416483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-chwkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75092ae-0227-4c0b-ae9d-5f885b87f382,},Annotations:map[string]string{io.kubernetes.container.hash: 71e02798,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a542069dc74f377347dc96320409d53427bb9f0696b72a44f3ff7d786b89677,PodSandboxId:6ea041a565f0ded31562cd8cb50a004fd7322ce5fd10c8a43b2264efcc3e8128,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709228469833719685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjjrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41fca01a-d52e-43f4-a5ba-57b73e13d971,},Annotations:map[string]string{io.kubernetes.container.hash: dbd63a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ba6519e24c5cb2bf71081b812a9d1f406fa95b244df463710d663da7efa612,PodSandboxId:932b0eec754d26f67c6e7e88929ec31dfde478ccfd7c77c745d9ec72daa44d36,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f70253
2592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709228448834272646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde510852a7323eefc6e6327b658ca56,},Annotations:map[string]string{io.kubernetes.container.hash: a3b4d800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877ba6c66aace1da89f7a68f399f8914fc312fc76e1697b408ceb02a57ecc48a,PodSandboxId:b63d4068ee1a0c83507444b94abd778b93ae7af1278b41f4cbf995b9e7750da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8e
a7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709228448832998567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168d8601412944728b39c9823374d2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e24d6c0095bac9ae8fc75b1ffaa6119a8bef7fd51e9e38a78d6ba801d94792d,PodSandboxId:72635eed945e10e92a9af004ab046d5ea3a17499eae9517f44a3810bc31eafeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709228448746746414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adccbb7d8bb568bee3b62d3329a764b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2f10c6422235ac791355cde3c3e91197ea588c761ac64b4d60a1e71fd2734,PodSandboxId:ebe2aebc2f65ad68c20d04a91e252efe899642b48fa3cb38482cf467965f587a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db63
50c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709228448761616628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0703dc73308092a04d0cb7198d47a774,},Annotations:map[string]string{io.kubernetes.container.hash: e88fb786,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b9f98d7-eae9-4642-a2d1-55f717fa67b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.673821496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=922c98b5-5f5b-48c3-a73b-246c06e95afa name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.673917880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=922c98b5-5f5b-48c3-a73b-246c06e95afa name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.675202914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b800b49-28a8-4e08-a405-eac0330d12eb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.676747149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709228752676716156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b800b49-28a8-4e08-a405-eac0330d12eb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.677547460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5757c1f5-778e-4912-baae-1d577b161232 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.677605232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5757c1f5-778e-4912-baae-1d577b161232 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.678362920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d84de4d35b7862b48dec29632f482f6f963e3ba2a0e7f9e7d6faecc1b877ff2e,PodSandboxId:cb2a3c9f50453187627fc4b922308a19ec319b47cbdbae4a59942f04ba10b1de,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709228745231519830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5dnt4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0844bd3-8bb9-4f6f-a58a-5de22232d314,},Annotations:map[string]string{io.kubernetes.container.hash: 431e904e,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef9fbc5a0f84a7f9c9893e4997791242648368398374538f69e0c288d2d8737,PodSandboxId:4f24a8bb2deb27728d547cbc4e2920b0a7fa0ad6301a66d872d7c5ea3013dbad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709228602935402283,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e49acc6-b997-4f27-b129-34cfa10cb8cb,},Annotations:map[string]string{io.kubern
etes.container.hash: 57f1d1ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93c4efa75f8f84eea8d74feff6201eb9d9fe55c26880e7719c9878650310883,PodSandboxId:e96407ce452468d3ad12afc0bc79873cfa63eb42547d3ddb2a0ee1c2d8627a2f,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709228585224702778,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-7tv5l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 51e32711-5762-4a9b-934a-dcb5b85938af,},Annotations:map[string]string{io.kubernetes.container.hash: 2fef238d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea58744ea337ede85bd99797e7b1ca59e5355a7226783e82d0f88b6e45a25f46,PodSandboxId:cba18d49ebc19d4cef21ff4dc19a67562876eb807e9719ccce22524675834a61,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709228557882950853,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-2b9zg,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2f2830c7-159f-4d7e-85d7-d3177e96848f,},Annotations:map[string]string{io.kubernetes.container.hash: ad76735d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b6acdb7806efb68df153cbfad720ef7ccdc66578ee586cf7d30c6b5c7f52e,PodSandboxId:8b240772f752896a5707f3d4fe94084304b1332d2214de5cbfc37a85c9cb53c4,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709228536885550796,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ddxvc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 917f5fea-b169-4fae-a1df-86b5a122e29f,},Annotations:map[string]string{io.kubernetes.container.hash: b763e23a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf078d06d86aae573b06fb831c432d51f42b1a6c6fd53cee58aa82e5b8a5283e,PodSandboxId:65e322853dcfbae4720fbea048a65f7359164aa38fbe2e0f8a0f74c51ca9c2c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:
1709228536027556443,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vl2zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbdf6b87-ad24-4ae8-8a78-c359cbe80f65,},Annotations:map[string]string{io.kubernetes.container.hash: 43b54ad6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad34b0c9f481b9ec48c521bf8c421880b1d7106b02f6df57cab09c9454a3becb,PodSandboxId:d7d479ba6e5c6a00153ef29cb932e4a1df4e2fb064220e36bcdde61fc1c615bf,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1709228524387629308,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-s7mv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 7b17e5c9-b2c3-48df-bac5-526e28913fda,},Annotations:map[string]string{io.kubernetes.container.hash: 3c030965,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349cfe6c090e3e4a66f0ee90cd0498a7acd8394aac42e3cad8e74fb650348152,PodSandboxId:16c22a1c340403740ca380d07a0efa55f8d4d51ec34766e212f24af74bdcdf7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709228476655375218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7c049f-ea7d-4015-b329-8a2d4bff29d7,},Annotations:map[string]string{io.kubernetes.container.hash: ba17c5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2f692bf619364a5ea1911921413b0e246cce66f2f56bf31d02f478eedc26fa,PodSandboxId:b2ce76c5d4493b17226759d01be67f42c381fb512e39c8411439013a798c4030,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6
e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709228470988416483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-chwkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75092ae-0227-4c0b-ae9d-5f885b87f382,},Annotations:map[string]string{io.kubernetes.container.hash: 71e02798,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a542069dc74f377347dc96320409d53427bb9f0696b72a44f3ff7d786b89677,PodSandboxId:6ea041a565f0ded31562cd8cb50a004fd7322ce5fd10c8a43b2264efcc3e8128,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709228469833719685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjjrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41fca01a-d52e-43f4-a5ba-57b73e13d971,},Annotations:map[string]string{io.kubernetes.container.hash: dbd63a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ba6519e24c5cb2bf71081b812a9d1f406fa95b244df463710d663da7efa612,PodSandboxId:932b0eec754d26f67c6e7e88929ec31dfde478ccfd7c77c745d9ec72daa44d36,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f70253
2592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709228448834272646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde510852a7323eefc6e6327b658ca56,},Annotations:map[string]string{io.kubernetes.container.hash: a3b4d800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877ba6c66aace1da89f7a68f399f8914fc312fc76e1697b408ceb02a57ecc48a,PodSandboxId:b63d4068ee1a0c83507444b94abd778b93ae7af1278b41f4cbf995b9e7750da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8e
a7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709228448832998567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168d8601412944728b39c9823374d2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e24d6c0095bac9ae8fc75b1ffaa6119a8bef7fd51e9e38a78d6ba801d94792d,PodSandboxId:72635eed945e10e92a9af004ab046d5ea3a17499eae9517f44a3810bc31eafeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709228448746746414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adccbb7d8bb568bee3b62d3329a764b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2f10c6422235ac791355cde3c3e91197ea588c761ac64b4d60a1e71fd2734,PodSandboxId:ebe2aebc2f65ad68c20d04a91e252efe899642b48fa3cb38482cf467965f587a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db63
50c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709228448761616628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0703dc73308092a04d0cb7198d47a774,},Annotations:map[string]string{io.kubernetes.container.hash: e88fb786,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5757c1f5-778e-4912-baae-1d577b161232 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.723790195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0016aa90-127d-44ea-b750-15994bf637b9 name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.723911814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0016aa90-127d-44ea-b750-15994bf637b9 name=/runtime.v1.RuntimeService/Version
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.725291770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e471c02b-f0ed-49e2-a0e2-38fccd4b5fff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.726527363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709228752726497439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e471c02b-f0ed-49e2-a0e2-38fccd4b5fff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.727800629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c62fa5b9-2711-4396-8b77-115ec31e72a6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.727863872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c62fa5b9-2711-4396-8b77-115ec31e72a6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 17:45:52 addons-848237 crio[671]: time="2024-02-29 17:45:52.728313171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d84de4d35b7862b48dec29632f482f6f963e3ba2a0e7f9e7d6faecc1b877ff2e,PodSandboxId:cb2a3c9f50453187627fc4b922308a19ec319b47cbdbae4a59942f04ba10b1de,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709228745231519830,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5dnt4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0844bd3-8bb9-4f6f-a58a-5de22232d314,},Annotations:map[string]string{io.kubernetes.container.hash: 431e904e,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef9fbc5a0f84a7f9c9893e4997791242648368398374538f69e0c288d2d8737,PodSandboxId:4f24a8bb2deb27728d547cbc4e2920b0a7fa0ad6301a66d872d7c5ea3013dbad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709228602935402283,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e49acc6-b997-4f27-b129-34cfa10cb8cb,},Annotations:map[string]string{io.kubern
etes.container.hash: 57f1d1ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93c4efa75f8f84eea8d74feff6201eb9d9fe55c26880e7719c9878650310883,PodSandboxId:e96407ce452468d3ad12afc0bc79873cfa63eb42547d3ddb2a0ee1c2d8627a2f,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709228585224702778,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-7tv5l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 51e32711-5762-4a9b-934a-dcb5b85938af,},Annotations:map[string]string{io.kubernetes.container.hash: 2fef238d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea58744ea337ede85bd99797e7b1ca59e5355a7226783e82d0f88b6e45a25f46,PodSandboxId:cba18d49ebc19d4cef21ff4dc19a67562876eb807e9719ccce22524675834a61,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709228557882950853,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-2b9zg,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2f2830c7-159f-4d7e-85d7-d3177e96848f,},Annotations:map[string]string{io.kubernetes.container.hash: ad76735d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b6acdb7806efb68df153cbfad720ef7ccdc66578ee586cf7d30c6b5c7f52e,PodSandboxId:8b240772f752896a5707f3d4fe94084304b1332d2214de5cbfc37a85c9cb53c4,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709228536885550796,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ddxvc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 917f5fea-b169-4fae-a1df-86b5a122e29f,},Annotations:map[string]string{io.kubernetes.container.hash: b763e23a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf078d06d86aae573b06fb831c432d51f42b1a6c6fd53cee58aa82e5b8a5283e,PodSandboxId:65e322853dcfbae4720fbea048a65f7359164aa38fbe2e0f8a0f74c51ca9c2c0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:
1709228536027556443,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vl2zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbdf6b87-ad24-4ae8-8a78-c359cbe80f65,},Annotations:map[string]string{io.kubernetes.container.hash: 43b54ad6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad34b0c9f481b9ec48c521bf8c421880b1d7106b02f6df57cab09c9454a3becb,PodSandboxId:d7d479ba6e5c6a00153ef29cb932e4a1df4e2fb064220e36bcdde61fc1c615bf,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1709228524387629308,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-s7mv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 7b17e5c9-b2c3-48df-bac5-526e28913fda,},Annotations:map[string]string{io.kubernetes.container.hash: 3c030965,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349cfe6c090e3e4a66f0ee90cd0498a7acd8394aac42e3cad8e74fb650348152,PodSandboxId:16c22a1c340403740ca380d07a0efa55f8d4d51ec34766e212f24af74bdcdf7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709228476655375218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7c049f-ea7d-4015-b329-8a2d4bff29d7,},Annotations:map[string]string{io.kubernetes.container.hash: ba17c5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2f692bf619364a5ea1911921413b0e246cce66f2f56bf31d02f478eedc26fa,PodSandboxId:b2ce76c5d4493b17226759d01be67f42c381fb512e39c8411439013a798c4030,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6
e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709228470988416483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-chwkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75092ae-0227-4c0b-ae9d-5f885b87f382,},Annotations:map[string]string{io.kubernetes.container.hash: 71e02798,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a542069dc74f377347dc96320409d53427bb9f0696b72a44f3ff7d786b89677,PodSandboxId:6ea041a565f0ded31562cd8cb50a004fd7322ce5fd10c8a43b2264efcc3e8128,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709228469833719685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjjrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41fca01a-d52e-43f4-a5ba-57b73e13d971,},Annotations:map[string]string{io.kubernetes.container.hash: dbd63a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ba6519e24c5cb2bf71081b812a9d1f406fa95b244df463710d663da7efa612,PodSandboxId:932b0eec754d26f67c6e7e88929ec31dfde478ccfd7c77c745d9ec72daa44d36,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f70253
2592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709228448834272646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bde510852a7323eefc6e6327b658ca56,},Annotations:map[string]string{io.kubernetes.container.hash: a3b4d800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877ba6c66aace1da89f7a68f399f8914fc312fc76e1697b408ceb02a57ecc48a,PodSandboxId:b63d4068ee1a0c83507444b94abd778b93ae7af1278b41f4cbf995b9e7750da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8e
a7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709228448832998567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168d8601412944728b39c9823374d2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e24d6c0095bac9ae8fc75b1ffaa6119a8bef7fd51e9e38a78d6ba801d94792d,PodSandboxId:72635eed945e10e92a9af004ab046d5ea3a17499eae9517f44a3810bc31eafeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709228448746746414,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adccbb7d8bb568bee3b62d3329a764b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2f10c6422235ac791355cde3c3e91197ea588c761ac64b4d60a1e71fd2734,PodSandboxId:ebe2aebc2f65ad68c20d04a91e252efe899642b48fa3cb38482cf467965f587a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db63
50c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709228448761616628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-848237,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0703dc73308092a04d0cb7198d47a774,},Annotations:map[string]string{io.kubernetes.container.hash: e88fb786,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c62fa5b9-2711-4396-8b77-115ec31e72a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d84de4d35b786       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   cb2a3c9f50453       hello-world-app-5d77478584-5dnt4
	aef9fbc5a0f84       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   4f24a8bb2deb2       nginx
	b93c4efa75f8f       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   e96407ce45246       headlamp-7ddfbb94ff-7tv5l
	ea58744ea337e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 3 minutes ago       Running             gcp-auth                  0                   cba18d49ebc19       gcp-auth-5f6b4f85fd-2b9zg
	b91b6acdb7806       eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b                                                             3 minutes ago       Exited              patch                     1                   8b240772f7528       ingress-nginx-admission-patch-ddxvc
	bf078d06d86aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e   3 minutes ago       Exited              create                    0                   65e322853dcfb       ingress-nginx-admission-create-vl2zr
	ad34b0c9f481b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   d7d479ba6e5c6       yakd-dashboard-9947fc6bf-s7mv2
	349cfe6c090e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   16c22a1c34040       storage-provisioner
	5a2f692bf6193       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   b2ce76c5d4493       coredns-5dd5756b68-chwkn
	7a542069dc74f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   6ea041a565f0d       kube-proxy-hjjrx
	43ba6519e24c5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   932b0eec754d2       etcd-addons-848237
	877ba6c66aace       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   b63d4068ee1a0       kube-controller-manager-addons-848237
	76d2f10c64222       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   ebe2aebc2f65a       kube-apiserver-addons-848237
	7e24d6c0095ba       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   72635eed945e1       kube-scheduler-addons-848237
	
	
	==> coredns [5a2f692bf619364a5ea1911921413b0e246cce66f2f56bf31d02f478eedc26fa] <==
	[INFO] 10.244.0.6:54231 - 31237 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000449047s
	[INFO] 10.244.0.6:37067 - 64487 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109869s
	[INFO] 10.244.0.6:37067 - 14074 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000443885s
	[INFO] 10.244.0.6:60195 - 46343 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087648s
	[INFO] 10.244.0.6:60195 - 20482 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000287006s
	[INFO] 10.244.0.6:60173 - 64898 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000202564s
	[INFO] 10.244.0.6:60173 - 3452 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00045963s
	[INFO] 10.244.0.6:56674 - 17283 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129696s
	[INFO] 10.244.0.6:56674 - 7326 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00027943s
	[INFO] 10.244.0.6:38434 - 63713 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091787s
	[INFO] 10.244.0.6:38434 - 44259 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00026528s
	[INFO] 10.244.0.6:37335 - 44517 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064329s
	[INFO] 10.244.0.6:37335 - 48103 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000587308s
	[INFO] 10.244.0.6:33283 - 22538 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040815s
	[INFO] 10.244.0.6:33283 - 18440 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000032906s
	[INFO] 10.244.0.21:56228 - 34226 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216029s
	[INFO] 10.244.0.21:53471 - 65222 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000134549s
	[INFO] 10.244.0.21:55453 - 36047 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144099s
	[INFO] 10.244.0.21:38669 - 47990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158247s
	[INFO] 10.244.0.21:35059 - 57557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110011s
	[INFO] 10.244.0.21:48746 - 37411 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073586s
	[INFO] 10.244.0.21:45157 - 31977 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000893985s
	[INFO] 10.244.0.21:60708 - 33301 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.00065259s
	[INFO] 10.244.0.24:57368 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000233195s
	[INFO] 10.244.0.24:45618 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071753s
	
	
	==> describe nodes <==
	Name:               addons-848237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-848237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=addons-848237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T17_40_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-848237
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 17:40:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-848237
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 17:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 17:43:59 +0000   Thu, 29 Feb 2024 17:40:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 17:43:59 +0000   Thu, 29 Feb 2024 17:40:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 17:43:59 +0000   Thu, 29 Feb 2024 17:40:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 17:43:59 +0000   Thu, 29 Feb 2024 17:40:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-848237
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 617dd1e4262248be8eef9065a6d9c0da
	  System UUID:                617dd1e4-2622-48be-8eef-9065a6d9c0da
	  Boot ID:                    41657230-7da6-4a3e-8e20-7239814a58fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-5dnt4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gcp-auth                    gcp-auth-5f6b4f85fd-2b9zg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  headlamp                    headlamp-7ddfbb94ff-7tv5l                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 coredns-5dd5756b68-chwkn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m46s
	  kube-system                 etcd-addons-848237                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m59s
	  kube-system                 kube-apiserver-addons-848237             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-848237    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-hjjrx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-848237             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-s7mv2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m41s                kube-proxy       
	  Normal  Starting                 5m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node addons-848237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node addons-848237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node addons-848237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s                kubelet          Node addons-848237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s                kubelet          Node addons-848237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s                kubelet          Node addons-848237 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m58s                kubelet          Node addons-848237 status is now: NodeReady
	  Normal  RegisteredNode           4m47s                node-controller  Node addons-848237 event: Registered Node addons-848237 in Controller
	
	
	==> dmesg <==
	[Feb29 17:41] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.024395] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.080714] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.303158] kauditd_printk_skb: 33 callbacks suppressed
	[  +9.731834] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.395494] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.489220] kauditd_printk_skb: 6 callbacks suppressed
	[Feb29 17:42] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.776293] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.558410] kauditd_printk_skb: 56 callbacks suppressed
	[  +6.672605] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.067203] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.920027] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.244257] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.082101] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.075777] kauditd_printk_skb: 56 callbacks suppressed
	[Feb29 17:43] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.477228] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.076751] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.817602] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.356907] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.643804] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.535015] kauditd_printk_skb: 25 callbacks suppressed
	[Feb29 17:45] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.721254] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [43ba6519e24c5cb2bf71081b812a9d1f406fa95b244df463710d663da7efa612] <==
	{"level":"warn","ts":"2024-02-29T17:42:27.338966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.714567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10840"}
	{"level":"info","ts":"2024-02-29T17:42:27.340574Z","caller":"traceutil/trace.go:171","msg":"trace[1648583145] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1118; }","duration":"315.319788ms","start":"2024-02-29T17:42:27.025237Z","end":"2024-02-29T17:42:27.340557Z","steps":["trace[1648583145] 'agreement among raft nodes before linearized reading'  (duration: 312.841129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:27.340633Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:42:27.025224Z","time spent":"315.39744ms","remote":"127.0.0.1:47702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10863,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-02-29T17:42:27.339315Z","caller":"traceutil/trace.go:171","msg":"trace[685033701] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"359.941039ms","start":"2024-02-29T17:42:26.979365Z","end":"2024-02-29T17:42:27.339306Z","steps":["trace[685033701] 'process raft request'  (duration: 358.657778ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:27.340767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:42:26.979349Z","time spent":"361.381936ms","remote":"127.0.0.1:47816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-848237\" mod_revision:1038 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-848237\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-848237\" > >"}
	{"level":"warn","ts":"2024-02-29T17:42:27.339446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.591191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13748"}
	{"level":"info","ts":"2024-02-29T17:42:27.340927Z","caller":"traceutil/trace.go:171","msg":"trace[1898498269] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1118; }","duration":"140.074292ms","start":"2024-02-29T17:42:27.200844Z","end":"2024-02-29T17:42:27.340919Z","steps":["trace[1898498269] 'agreement among raft nodes before linearized reading'  (duration: 138.536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:27.339553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.550268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81555"}
	{"level":"info","ts":"2024-02-29T17:42:27.341036Z","caller":"traceutil/trace.go:171","msg":"trace[1160473689] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1118; }","duration":"128.033557ms","start":"2024-02-29T17:42:27.212997Z","end":"2024-02-29T17:42:27.341031Z","steps":["trace[1160473689] 'agreement among raft nodes before linearized reading'  (duration: 126.460174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:27.339579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.747173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-02-29T17:42:27.344874Z","caller":"traceutil/trace.go:171","msg":"trace[1776303669] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1118; }","duration":"318.038268ms","start":"2024-02-29T17:42:27.026827Z","end":"2024-02-29T17:42:27.344865Z","steps":["trace[1776303669] 'agreement among raft nodes before linearized reading'  (duration: 312.736431ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:27.344908Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:42:27.026817Z","time spent":"318.07887ms","remote":"127.0.0.1:47714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":12,"response size":30,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	{"level":"warn","ts":"2024-02-29T17:42:40.57149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.585512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-02-29T17:42:40.571574Z","caller":"traceutil/trace.go:171","msg":"trace[425139794] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1176; }","duration":"225.688277ms","start":"2024-02-29T17:42:40.345872Z","end":"2024-02-29T17:42:40.57156Z","steps":["trace[425139794] 'count revisions from in-memory index tree'  (duration: 225.506378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:40.571757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.285004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81555"}
	{"level":"info","ts":"2024-02-29T17:42:40.57179Z","caller":"traceutil/trace.go:171","msg":"trace[219744824] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1176; }","duration":"359.324216ms","start":"2024-02-29T17:42:40.212457Z","end":"2024-02-29T17:42:40.571781Z","steps":["trace[219744824] 'range keys from in-memory index tree'  (duration: 359.140867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:40.571821Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:42:40.212444Z","time spent":"359.371404ms","remote":"127.0.0.1:47702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81578,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-02-29T17:42:40.571845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.388896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-pxlgz.17b8663e8952024e\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2024-02-29T17:42:40.571864Z","caller":"traceutil/trace.go:171","msg":"trace[951215269] range","detail":"{range_begin:/registry/events/gadget/gadget-pxlgz.17b8663e8952024e; range_end:; response_count:1; response_revision:1176; }","duration":"441.410892ms","start":"2024-02-29T17:42:40.130447Z","end":"2024-02-29T17:42:40.571858Z","steps":["trace[951215269] 'range keys from in-memory index tree'  (duration: 441.310901ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:42:40.571881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:42:40.130433Z","time spent":"441.443903ms","remote":"127.0.0.1:47636","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":831,"request content":"key:\"/registry/events/gadget/gadget-pxlgz.17b8663e8952024e\" "}
	{"level":"info","ts":"2024-02-29T17:43:01.038851Z","caller":"traceutil/trace.go:171","msg":"trace[233858422] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1383; }","duration":"224.510226ms","start":"2024-02-29T17:43:00.814325Z","end":"2024-02-29T17:43:01.038835Z","steps":["trace[233858422] 'process raft request'  (duration: 224.36907ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:43:16.088493Z","caller":"traceutil/trace.go:171","msg":"trace[790633994] transaction","detail":"{read_only:false; response_revision:1512; number_of_response:1; }","duration":"402.715727ms","start":"2024-02-29T17:43:15.685757Z","end":"2024-02-29T17:43:16.088473Z","steps":["trace[790633994] 'process raft request'  (duration: 402.618848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:43:16.088624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T17:43:15.685742Z","time spent":"402.804493ms","remote":"127.0.0.1:47694","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1510 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-02-29T17:43:16.090264Z","caller":"traceutil/trace.go:171","msg":"trace[389474457] transaction","detail":"{read_only:false; response_revision:1513; number_of_response:1; }","duration":"296.490829ms","start":"2024-02-29T17:43:15.79374Z","end":"2024-02-29T17:43:16.090231Z","steps":["trace[389474457] 'process raft request'  (duration: 296.347852ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:43:36.944525Z","caller":"traceutil/trace.go:171","msg":"trace[450462390] transaction","detail":"{read_only:false; response_revision:1638; number_of_response:1; }","duration":"114.412983ms","start":"2024-02-29T17:43:36.830097Z","end":"2024-02-29T17:43:36.94451Z","steps":["trace[450462390] 'process raft request'  (duration: 114.250139ms)"],"step_count":1}
	
	
	==> gcp-auth [ea58744ea337ede85bd99797e7b1ca59e5355a7226783e82d0f88b6e45a25f46] <==
	2024/02/29 17:42:38 GCP Auth Webhook started!
	2024/02/29 17:42:44 Ready to marshal response ...
	2024/02/29 17:42:44 Ready to write response ...
	2024/02/29 17:42:44 Ready to marshal response ...
	2024/02/29 17:42:44 Ready to write response ...
	2024/02/29 17:42:54 Ready to marshal response ...
	2024/02/29 17:42:54 Ready to write response ...
	2024/02/29 17:42:55 Ready to marshal response ...
	2024/02/29 17:42:55 Ready to write response ...
	2024/02/29 17:42:57 Ready to marshal response ...
	2024/02/29 17:42:57 Ready to write response ...
	2024/02/29 17:42:57 Ready to marshal response ...
	2024/02/29 17:42:57 Ready to write response ...
	2024/02/29 17:42:58 Ready to marshal response ...
	2024/02/29 17:42:58 Ready to write response ...
	2024/02/29 17:43:10 Ready to marshal response ...
	2024/02/29 17:43:10 Ready to write response ...
	2024/02/29 17:43:12 Ready to marshal response ...
	2024/02/29 17:43:12 Ready to write response ...
	2024/02/29 17:43:28 Ready to marshal response ...
	2024/02/29 17:43:28 Ready to write response ...
	2024/02/29 17:43:32 Ready to marshal response ...
	2024/02/29 17:43:32 Ready to write response ...
	2024/02/29 17:45:41 Ready to marshal response ...
	2024/02/29 17:45:41 Ready to write response ...
	
	
	==> kernel <==
	 17:45:53 up 5 min,  0 users,  load average: 1.70, 1.36, 0.66
	Linux addons-848237 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76d2f10c6422235ac791355cde3c3e91197ea588c761ac64b4d60a1e71fd2734] <==
	E0229 17:43:11.694476       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0229 17:43:12.336414       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0229 17:43:12.526221       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.48.132"}
	I0229 17:43:25.191346       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0229 17:43:43.077622       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0229 17:43:49.856702       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.856832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.862781       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.862856       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.874834       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.875634       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.888821       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.889231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.899236       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.899296       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.917616       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.917688       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.934381       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.934739       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 17:43:49.944995       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 17:43:49.945051       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0229 17:43:50.890478       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0229 17:43:50.934490       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0229 17:43:50.951349       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0229 17:45:42.066079       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.214.59"}
	
	
	==> kube-controller-manager [877ba6c66aace1da89f7a68f399f8914fc312fc76e1697b408ceb02a57ecc48a] <==
	E0229 17:44:24.330052       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:44:27.437822       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:44:27.437924       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:44:28.171229       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:44:28.171381       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:44:56.274501       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:44:56.274565       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:45:04.977812       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:45:04.977944       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:45:10.401985       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:45:10.402100       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:45:17.725001       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:45:17.725177       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 17:45:38.837224       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 17:45:38.837304       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0229 17:45:41.773491       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0229 17:45:41.808843       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-5dnt4"
	I0229 17:45:41.825321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.002252ms"
	I0229 17:45:41.851858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.490318ms"
	I0229 17:45:41.851970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.407µs"
	I0229 17:45:44.735193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="7.752µs"
	I0229 17:45:44.750646       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0229 17:45:44.765843       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 17:45:45.873053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.594742ms"
	I0229 17:45:45.873922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="123.304µs"
	
	
	==> kube-proxy [7a542069dc74f377347dc96320409d53427bb9f0696b72a44f3ff7d786b89677] <==
	I0229 17:41:10.909416       1 server_others.go:69] "Using iptables proxy"
	I0229 17:41:10.933396       1 node.go:141] Successfully retrieved node IP: 192.168.39.195
	I0229 17:41:11.048828       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 17:41:11.048871       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 17:41:11.061631       1 server_others.go:152] "Using iptables Proxier"
	I0229 17:41:11.061686       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 17:41:11.061929       1 server.go:846] "Version info" version="v1.28.4"
	I0229 17:41:11.061963       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:41:11.063232       1 config.go:188] "Starting service config controller"
	I0229 17:41:11.063245       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 17:41:11.063260       1 config.go:97] "Starting endpoint slice config controller"
	I0229 17:41:11.063263       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 17:41:11.063555       1 config.go:315] "Starting node config controller"
	I0229 17:41:11.063560       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 17:41:11.163561       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 17:41:11.163579       1 shared_informer.go:318] Caches are synced for node config
	I0229 17:41:11.163561       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7e24d6c0095bac9ae8fc75b1ffaa6119a8bef7fd51e9e38a78d6ba801d94792d] <==
	W0229 17:40:51.724288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 17:40:51.724328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 17:40:51.724372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 17:40:51.724408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 17:40:51.724519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 17:40:51.724553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 17:40:51.724603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 17:40:51.724665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 17:40:51.724927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 17:40:51.724964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 17:40:51.725211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 17:40:51.725249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 17:40:52.770938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 17:40:52.771597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 17:40:52.799437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 17:40:52.799509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 17:40:52.864971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 17:40:52.865099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 17:40:52.899890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 17:40:52.900047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 17:40:52.952847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 17:40:52.952896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 17:40:52.981763       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 17:40:52.982885       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 17:40:54.709365       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.821798    1201 memory_manager.go:346] "RemoveStaleState removing state" podUID="4a4cdfb3-12ee-4bfd-8dd2-36a3f2e5be23" containerName="volume-snapshot-controller"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.821880    1201 memory_manager.go:346] "RemoveStaleState removing state" podUID="e0c9c1c1-da82-4683-9815-109b818d8551" containerName="csi-external-health-monitor-controller"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.821949    1201 memory_manager.go:346] "RemoveStaleState removing state" podUID="483406c3-b4c2-4982-ba8c-439a5c2a740c" containerName="csi-resizer"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.821984    1201 memory_manager.go:346] "RemoveStaleState removing state" podUID="e0c9c1c1-da82-4683-9815-109b818d8551" containerName="liveness-probe"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.822086    1201 memory_manager.go:346] "RemoveStaleState removing state" podUID="e0c9c1c1-da82-4683-9815-109b818d8551" containerName="csi-provisioner"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.899537    1201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0844bd3-8bb9-4f6f-a58a-5de22232d314-gcp-creds\") pod \"hello-world-app-5d77478584-5dnt4\" (UID: \"f0844bd3-8bb9-4f6f-a58a-5de22232d314\") " pod="default/hello-world-app-5d77478584-5dnt4"
	Feb 29 17:45:41 addons-848237 kubelet[1201]: I0229 17:45:41.899612    1201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fhq\" (UniqueName: \"kubernetes.io/projected/f0844bd3-8bb9-4f6f-a58a-5de22232d314-kube-api-access-46fhq\") pod \"hello-world-app-5d77478584-5dnt4\" (UID: \"f0844bd3-8bb9-4f6f-a58a-5de22232d314\") " pod="default/hello-world-app-5d77478584-5dnt4"
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.113447    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp5nn\" (UniqueName: \"kubernetes.io/projected/c179431b-fde2-47e6-b813-21fc946a70af-kube-api-access-dp5nn\") pod \"c179431b-fde2-47e6-b813-21fc946a70af\" (UID: \"c179431b-fde2-47e6-b813-21fc946a70af\") "
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.116998    1201 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c179431b-fde2-47e6-b813-21fc946a70af-kube-api-access-dp5nn" (OuterVolumeSpecName: "kube-api-access-dp5nn") pod "c179431b-fde2-47e6-b813-21fc946a70af" (UID: "c179431b-fde2-47e6-b813-21fc946a70af"). InnerVolumeSpecName "kube-api-access-dp5nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.214762    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dp5nn\" (UniqueName: \"kubernetes.io/projected/c179431b-fde2-47e6-b813-21fc946a70af-kube-api-access-dp5nn\") on node \"addons-848237\" DevicePath \"\""
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.772774    1201 scope.go:117] "RemoveContainer" containerID="4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780"
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.809633    1201 scope.go:117] "RemoveContainer" containerID="4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780"
	Feb 29 17:45:43 addons-848237 kubelet[1201]: E0229 17:45:43.810412    1201 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780\": container with ID starting with 4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780 not found: ID does not exist" containerID="4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780"
	Feb 29 17:45:43 addons-848237 kubelet[1201]: I0229 17:45:43.810479    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780"} err="failed to get container status \"4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780\": rpc error: code = NotFound desc = could not find container \"4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780\": container with ID starting with 4acf7328f2d5f4a7e3c3426af090f1ef54a475bd7df51bbec131ecbbb09c5780 not found: ID does not exist"
	Feb 29 17:45:45 addons-848237 kubelet[1201]: I0229 17:45:45.124610    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="917f5fea-b169-4fae-a1df-86b5a122e29f" path="/var/lib/kubelet/pods/917f5fea-b169-4fae-a1df-86b5a122e29f/volumes"
	Feb 29 17:45:45 addons-848237 kubelet[1201]: I0229 17:45:45.125076    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bbdf6b87-ad24-4ae8-8a78-c359cbe80f65" path="/var/lib/kubelet/pods/bbdf6b87-ad24-4ae8-8a78-c359cbe80f65/volumes"
	Feb 29 17:45:45 addons-848237 kubelet[1201]: I0229 17:45:45.125694    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c179431b-fde2-47e6-b813-21fc946a70af" path="/var/lib/kubelet/pods/c179431b-fde2-47e6-b813-21fc946a70af/volumes"
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.054164    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xvm9\" (UniqueName: \"kubernetes.io/projected/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-kube-api-access-4xvm9\") pod \"297dbe85-6629-4d4c-8aaa-10fb49d11c1a\" (UID: \"297dbe85-6629-4d4c-8aaa-10fb49d11c1a\") "
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.054215    1201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-webhook-cert\") pod \"297dbe85-6629-4d4c-8aaa-10fb49d11c1a\" (UID: \"297dbe85-6629-4d4c-8aaa-10fb49d11c1a\") "
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.056514    1201 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-kube-api-access-4xvm9" (OuterVolumeSpecName: "kube-api-access-4xvm9") pod "297dbe85-6629-4d4c-8aaa-10fb49d11c1a" (UID: "297dbe85-6629-4d4c-8aaa-10fb49d11c1a"). InnerVolumeSpecName "kube-api-access-4xvm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.058273    1201 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "297dbe85-6629-4d4c-8aaa-10fb49d11c1a" (UID: "297dbe85-6629-4d4c-8aaa-10fb49d11c1a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.155206    1201 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-webhook-cert\") on node \"addons-848237\" DevicePath \"\""
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.155242    1201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4xvm9\" (UniqueName: \"kubernetes.io/projected/297dbe85-6629-4d4c-8aaa-10fb49d11c1a-kube-api-access-4xvm9\") on node \"addons-848237\" DevicePath \"\""
	Feb 29 17:45:48 addons-848237 kubelet[1201]: I0229 17:45:48.869369    1201 scope.go:117] "RemoveContainer" containerID="7ef5a2b60487f2c6bf1b641fa9262a260b91513d17fe0d64bc38eb5c06316082"
	Feb 29 17:45:49 addons-848237 kubelet[1201]: I0229 17:45:49.126391    1201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="297dbe85-6629-4d4c-8aaa-10fb49d11c1a" path="/var/lib/kubelet/pods/297dbe85-6629-4d4c-8aaa-10fb49d11c1a/volumes"
	
	
	==> storage-provisioner [349cfe6c090e3e4a66f0ee90cd0498a7acd8394aac42e3cad8e74fb650348152] <==
	I0229 17:41:17.564597       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 17:41:17.755911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 17:41:17.755991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 17:41:17.838800       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 17:41:17.838976       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-848237_d4758602-fbba-44d5-83ec-02729cac23a4!
	I0229 17:41:17.839904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98bf96de-d621-4b43-bf81-7b0559c8bf3a", APIVersion:"v1", ResourceVersion:"754", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-848237_d4758602-fbba-44d5-83ec-02729cac23a4 became leader
	I0229 17:41:17.943688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-848237_d4758602-fbba-44d5-83ec-02729cac23a4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-848237 -n addons-848237
helpers_test.go:261: (dbg) Run:  kubectl --context addons-848237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (161.90s)
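A minimal post-mortem sketch for this failure, assuming the addons-848237 profile is still up; the ingress-nginx namespace and the ingress-nginx-controller deployment name are inferred from the kube-controller-manager entries above, so treat them as assumptions:
	kubectl --context addons-848237 -n ingress-nginx get pods -o wide                                  # confirm the controller pod is Running on the node
	kubectl --context addons-848237 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100   # look for the request that never reached the nginx backend
	kubectl --context addons-848237 describe ingress -A                                                # check the ingress rule, host, and backend endpoints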

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-848237
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-848237: exit status 82 (2m0.270553491s)

                                                
                                                
-- stdout --
	* Stopping node "addons-848237"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-848237" : exit status 82
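A hedged follow-up for the GUEST_STOP_TIMEOUT above, using only standard minikube commands and the log file suggested in the error box; the profile name is taken from this run:
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-848237   # capture the logs the error box asks to attach
	out/minikube-linux-amd64 status -p addons-848237                 # confirm whether the VM is still reported as Running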
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-848237
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-848237: exit status 11 (21.60321733s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-848237" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-848237
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-848237: exit status 11 (6.142976052s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-848237" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-848237
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-848237: exit status 11 (6.143602664s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-848237" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.16s)
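A small sketch of how the symptom behind these three exit-status-11 failures could be confirmed by hand; the address and port come from the "no route to host" errors above, and nc is only one way to probe them:
	out/minikube-linux-amd64 status -p addons-848237   # expect an error while the VM is unreachable
	nc -vz -w 5 192.168.39.195 22                      # probe the SSH endpoint the addon commands dial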

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (286.1s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-779504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0229 17:54:05.711718   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:55:27.632431   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:57:43.785800   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:57:46.664748   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.669980   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.680211   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.700394   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.740643   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.820968   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:46.981444   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:47.302085   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:47.943037   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:49.223591   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:51.785384   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:57:56.906055   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:58:07.147107   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 17:58:11.475626   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:58:27.628094   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-779504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: exit status 109 (4m46.050709619s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-779504] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-779504 in cluster ingress-addon-legacy-779504
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:53:44.822088   22893 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:53:44.822208   22893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:44.822217   22893 out.go:304] Setting ErrFile to fd 2...
	I0229 17:53:44.822221   22893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:44.822909   22893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:53:44.823690   22893 out.go:298] Setting JSON to false
	I0229 17:53:44.824579   22893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2169,"bootTime":1709227056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:53:44.824634   22893 start.go:139] virtualization: kvm guest
	I0229 17:53:44.826558   22893 out.go:177] * [ingress-addon-legacy-779504] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:53:44.828270   22893 notify.go:220] Checking for updates...
	I0229 17:53:44.828298   22893 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:53:44.829827   22893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:53:44.831211   22893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:53:44.832507   22893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:53:44.833780   22893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:53:44.835114   22893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:53:44.836542   22893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:53:44.869368   22893 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 17:53:44.870543   22893 start.go:299] selected driver: kvm2
	I0229 17:53:44.870560   22893 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:53:44.870572   22893 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:53:44.871302   22893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:53:44.871384   22893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:53:44.885313   22893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:53:44.885353   22893 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:53:44.885534   22893 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 17:53:44.885590   22893 cni.go:84] Creating CNI manager for ""
	I0229 17:53:44.885603   22893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:53:44.885611   22893 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:53:44.885618   22893 start_flags.go:323] config:
	{Name:ingress-addon-legacy-779504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:53:44.885722   22893 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:53:44.887343   22893 out.go:177] * Starting control plane node ingress-addon-legacy-779504 in cluster ingress-addon-legacy-779504
	I0229 17:53:44.888441   22893 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 17:53:44.997420   22893 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0229 17:53:44.997448   22893 cache.go:56] Caching tarball of preloaded images
	I0229 17:53:44.997591   22893 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 17:53:44.999355   22893 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 17:53:45.000762   22893 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:53:45.107445   22893 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0229 17:54:01.824989   22893 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:54:01.825082   22893 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:54:02.735434   22893 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
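If a corrupted preload is suspected, the download recorded in the lines above can be re-verified by hand. The URL and md5 checksum are copied from the log; the local filename is only illustrative.

    # fetch the same preload tarball that minikube downloaded above
    curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
    # compare against the checksum the downloader verified (md5 taken from the log)
    echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -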
	I0229 17:54:02.735750   22893 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/config.json ...
	I0229 17:54:02.735778   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/config.json: {Name:mk2c1df5cfd79f38f1ef804d30c1d62f48ea8589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:02.735936   22893 start.go:365] acquiring machines lock for ingress-addon-legacy-779504: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:54:02.735968   22893 start.go:369] acquired machines lock for "ingress-addon-legacy-779504" in 17.817µs
	I0229 17:54:02.735985   22893 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-779504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 17:54:02.736072   22893 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 17:54:02.739166   22893 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 17:54:02.739329   22893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:54:02.739370   22893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:54:02.753830   22893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I0229 17:54:02.754389   22893 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:54:02.754927   22893 main.go:141] libmachine: Using API Version  1
	I0229 17:54:02.754948   22893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:54:02.755283   22893 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:54:02.755477   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetMachineName
	I0229 17:54:02.755643   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:02.755795   22893 start.go:159] libmachine.API.Create for "ingress-addon-legacy-779504" (driver="kvm2")
	I0229 17:54:02.755825   22893 client.go:168] LocalClient.Create starting
	I0229 17:54:02.755862   22893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 17:54:02.755899   22893 main.go:141] libmachine: Decoding PEM data...
	I0229 17:54:02.755929   22893 main.go:141] libmachine: Parsing certificate...
	I0229 17:54:02.756014   22893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 17:54:02.756046   22893 main.go:141] libmachine: Decoding PEM data...
	I0229 17:54:02.756064   22893 main.go:141] libmachine: Parsing certificate...
	I0229 17:54:02.756091   22893 main.go:141] libmachine: Running pre-create checks...
	I0229 17:54:02.756105   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .PreCreateCheck
	I0229 17:54:02.756397   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetConfigRaw
	I0229 17:54:02.756780   22893 main.go:141] libmachine: Creating machine...
	I0229 17:54:02.756794   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Create
	I0229 17:54:02.756927   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Creating KVM machine...
	I0229 17:54:02.758423   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found existing default KVM network
	I0229 17:54:02.759146   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:02.759001   22960 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0229 17:54:02.764240   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | trying to create private KVM network mk-ingress-addon-legacy-779504 192.168.39.0/24...
	I0229 17:54:02.825601   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | private KVM network mk-ingress-addon-legacy-779504 192.168.39.0/24 created
	I0229 17:54:02.825645   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:02.825517   22960 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:54:02.825660   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504 ...
	I0229 17:54:02.825682   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:54:02.825704   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 17:54:03.040204   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:03.040050   22960 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa...
	I0229 17:54:03.139240   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:03.139127   22960 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/ingress-addon-legacy-779504.rawdisk...
	I0229 17:54:03.139278   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Writing magic tar header
	I0229 17:54:03.139299   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Writing SSH key tar header
	I0229 17:54:03.139313   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:03.139238   22960 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504 ...
	I0229 17:54:03.139352   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504
	I0229 17:54:03.139370   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 17:54:03.139384   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504 (perms=drwx------)
	I0229 17:54:03.139402   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 17:54:03.139410   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 17:54:03.139417   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 17:54:03.139424   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 17:54:03.139437   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 17:54:03.139452   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:54:03.139465   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Creating domain...
	I0229 17:54:03.139485   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 17:54:03.139495   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 17:54:03.139502   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home/jenkins
	I0229 17:54:03.139510   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Checking permissions on dir: /home
	I0229 17:54:03.139538   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Skipping /home - not owner
	I0229 17:54:03.140543   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) define libvirt domain using xml: 
	I0229 17:54:03.140562   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) <domain type='kvm'>
	I0229 17:54:03.140569   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <name>ingress-addon-legacy-779504</name>
	I0229 17:54:03.140575   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <memory unit='MiB'>4096</memory>
	I0229 17:54:03.140587   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <vcpu>2</vcpu>
	I0229 17:54:03.140592   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <features>
	I0229 17:54:03.140598   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <acpi/>
	I0229 17:54:03.140602   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <apic/>
	I0229 17:54:03.140610   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <pae/>
	I0229 17:54:03.140614   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     
	I0229 17:54:03.140622   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   </features>
	I0229 17:54:03.140631   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <cpu mode='host-passthrough'>
	I0229 17:54:03.140639   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   
	I0229 17:54:03.140657   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   </cpu>
	I0229 17:54:03.140669   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <os>
	I0229 17:54:03.140679   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <type>hvm</type>
	I0229 17:54:03.140688   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <boot dev='cdrom'/>
	I0229 17:54:03.140693   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <boot dev='hd'/>
	I0229 17:54:03.140699   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <bootmenu enable='no'/>
	I0229 17:54:03.140703   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   </os>
	I0229 17:54:03.140709   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   <devices>
	I0229 17:54:03.140714   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <disk type='file' device='cdrom'>
	I0229 17:54:03.140731   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/boot2docker.iso'/>
	I0229 17:54:03.140749   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <target dev='hdc' bus='scsi'/>
	I0229 17:54:03.140762   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <readonly/>
	I0229 17:54:03.140778   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </disk>
	I0229 17:54:03.140787   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <disk type='file' device='disk'>
	I0229 17:54:03.140793   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 17:54:03.140807   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/ingress-addon-legacy-779504.rawdisk'/>
	I0229 17:54:03.140819   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <target dev='hda' bus='virtio'/>
	I0229 17:54:03.140840   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </disk>
	I0229 17:54:03.140853   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <interface type='network'>
	I0229 17:54:03.140865   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <source network='mk-ingress-addon-legacy-779504'/>
	I0229 17:54:03.140877   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <model type='virtio'/>
	I0229 17:54:03.140888   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </interface>
	I0229 17:54:03.140901   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <interface type='network'>
	I0229 17:54:03.140912   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <source network='default'/>
	I0229 17:54:03.140936   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <model type='virtio'/>
	I0229 17:54:03.140954   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </interface>
	I0229 17:54:03.140977   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <serial type='pty'>
	I0229 17:54:03.141002   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <target port='0'/>
	I0229 17:54:03.141015   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </serial>
	I0229 17:54:03.141029   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <console type='pty'>
	I0229 17:54:03.141043   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <target type='serial' port='0'/>
	I0229 17:54:03.141054   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </console>
	I0229 17:54:03.141074   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     <rng model='virtio'>
	I0229 17:54:03.141095   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)       <backend model='random'>/dev/random</backend>
	I0229 17:54:03.141109   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     </rng>
	I0229 17:54:03.141120   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     
	I0229 17:54:03.141130   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)     
	I0229 17:54:03.141141   22893 main.go:141] libmachine: (ingress-addon-legacy-779504)   </devices>
	I0229 17:54:03.141152   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) </domain>
	I0229 17:54:03.141166   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) 
	I0229 17:54:03.145085   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:e9:72:a8 in network default
	I0229 17:54:03.145604   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Ensuring networks are active...
	I0229 17:54:03.145625   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:03.146215   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Ensuring network default is active
	I0229 17:54:03.146541   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Ensuring network mk-ingress-addon-legacy-779504 is active
	I0229 17:54:03.147039   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Getting domain xml...
	I0229 17:54:03.147712   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Creating domain...
	I0229 17:54:04.310778   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Waiting to get IP...
	I0229 17:54:04.311586   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.311972   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.311998   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:04.311937   22960 retry.go:31] will retry after 312.246482ms: waiting for machine to come up
	I0229 17:54:04.625295   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.625765   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.625794   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:04.625715   22960 retry.go:31] will retry after 281.842125ms: waiting for machine to come up
	I0229 17:54:04.909152   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.909516   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:04.909542   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:04.909477   22960 retry.go:31] will retry after 405.085769ms: waiting for machine to come up
	I0229 17:54:05.315702   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:05.316102   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:05.316116   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:05.316078   22960 retry.go:31] will retry after 475.520073ms: waiting for machine to come up
	I0229 17:54:05.792612   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:05.793167   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:05.793187   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:05.793122   22960 retry.go:31] will retry after 749.137471ms: waiting for machine to come up
	I0229 17:54:06.543481   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:06.543928   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:06.543952   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:06.543861   22960 retry.go:31] will retry after 917.443362ms: waiting for machine to come up
	I0229 17:54:07.462942   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:07.463330   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:07.463358   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:07.463288   22960 retry.go:31] will retry after 1.084654233s: waiting for machine to come up
	I0229 17:54:08.549925   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:08.550380   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:08.550408   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:08.550323   22960 retry.go:31] will retry after 1.181338628s: waiting for machine to come up
	I0229 17:54:09.732738   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:09.733142   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:09.733169   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:09.733108   22960 retry.go:31] will retry after 1.474032335s: waiting for machine to come up
	I0229 17:54:11.209674   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:11.210054   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:11.210080   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:11.210020   22960 retry.go:31] will retry after 2.11789974s: waiting for machine to come up
	I0229 17:54:13.331435   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:13.332030   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:13.332060   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:13.331976   22960 retry.go:31] will retry after 1.939355909s: waiting for machine to come up
	I0229 17:54:15.272595   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:15.273000   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:15.273032   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:15.272950   22960 retry.go:31] will retry after 3.47846528s: waiting for machine to come up
	I0229 17:54:18.753344   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:18.753707   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:18.753730   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:18.753678   22960 retry.go:31] will retry after 2.924470736s: waiting for machine to come up
	I0229 17:54:21.680075   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:21.680409   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find current IP address of domain ingress-addon-legacy-779504 in network mk-ingress-addon-legacy-779504
	I0229 17:54:21.680434   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | I0229 17:54:21.680372   22960 retry.go:31] will retry after 3.715514764s: waiting for machine to come up
	I0229 17:54:25.399398   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.399802   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Found IP for machine: 192.168.39.104
	I0229 17:54:25.399834   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has current primary IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.399845   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Reserving static IP address...
	I0229 17:54:25.400169   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-779504", mac: "52:54:00:1b:79:48", ip: "192.168.39.104"} in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.470583   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Getting to WaitForSSH function...
	I0229 17:54:25.470618   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Reserved static IP address: 192.168.39.104
	I0229 17:54:25.470632   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Waiting for SSH to be available...
	I0229 17:54:25.473227   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.473585   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:25.473618   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.473767   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Using SSH client type: external
	I0229 17:54:25.473792   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa (-rw-------)
	I0229 17:54:25.473867   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:54:25.473909   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | About to run SSH command:
	I0229 17:54:25.473928   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | exit 0
	I0229 17:54:25.599400   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | SSH cmd err, output: <nil>: 
	I0229 17:54:25.599643   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) KVM machine creation complete!
	I0229 17:54:25.599943   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetConfigRaw
	I0229 17:54:25.600461   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:25.600732   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:25.600895   22893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 17:54:25.600923   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetState
	I0229 17:54:25.602190   22893 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 17:54:25.602202   22893 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 17:54:25.602208   22893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 17:54:25.602214   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:25.604399   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.604804   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:25.604831   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.604929   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:25.605074   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.605226   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.605336   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:25.605485   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:25.605687   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:25.605700   22893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 17:54:25.714759   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:54:25.714780   22893 main.go:141] libmachine: Detecting the provisioner...
	I0229 17:54:25.714791   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:25.717471   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.717843   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:25.717876   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.717998   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:25.718165   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.718307   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.718412   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:25.718521   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:25.718678   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:25.718688   22893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 17:54:25.828056   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 17:54:25.828150   22893 main.go:141] libmachine: found compatible host: buildroot
	I0229 17:54:25.828166   22893 main.go:141] libmachine: Provisioning with buildroot...
	I0229 17:54:25.828179   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetMachineName
	I0229 17:54:25.828413   22893 buildroot.go:166] provisioning hostname "ingress-addon-legacy-779504"
	I0229 17:54:25.828435   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetMachineName
	I0229 17:54:25.828668   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:25.830890   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.831287   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:25.831319   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.831467   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:25.831620   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.831772   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.831895   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:25.832050   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:25.832214   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:25.832227   22893 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-779504 && echo "ingress-addon-legacy-779504" | sudo tee /etc/hostname
	I0229 17:54:25.960587   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-779504
	
	I0229 17:54:25.960613   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:25.963413   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.963760   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:25.963789   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:25.963944   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:25.964141   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.964301   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:25.964416   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:25.964560   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:25.964728   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:25.964744   22893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-779504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-779504/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-779504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:54:26.085075   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:54:26.085107   22893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 17:54:26.085124   22893 buildroot.go:174] setting up certificates
	I0229 17:54:26.085132   22893 provision.go:83] configureAuth start
	I0229 17:54:26.085140   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetMachineName
	I0229 17:54:26.085416   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetIP
	I0229 17:54:26.087964   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.088261   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.088294   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.088395   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:26.090280   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.090634   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.090665   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.090781   22893 provision.go:138] copyHostCerts
	I0229 17:54:26.090820   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 17:54:26.090850   22893 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 17:54:26.090866   22893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 17:54:26.090932   22893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 17:54:26.091054   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 17:54:26.091078   22893 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 17:54:26.091085   22893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 17:54:26.091117   22893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 17:54:26.091166   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 17:54:26.091182   22893 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 17:54:26.091186   22893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 17:54:26.091206   22893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 17:54:26.091260   22893 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-779504 san=[192.168.39.104 192.168.39.104 localhost 127.0.0.1 minikube ingress-addon-legacy-779504]
	I0229 17:54:26.433613   22893 provision.go:172] copyRemoteCerts
	I0229 17:54:26.433664   22893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:54:26.433689   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:26.436320   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.436636   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.436665   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.436801   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:26.437007   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:26.437163   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:26.437328   22893 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:54:26.522894   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 17:54:26.522954   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 17:54:26.549402   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 17:54:26.549478   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 17:54:26.575079   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 17:54:26.575153   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 17:54:26.600020   22893 provision.go:86] duration metric: configureAuth took 514.877178ms
	I0229 17:54:26.600048   22893 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:54:26.600236   22893 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 17:54:26.600302   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:26.602731   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.603059   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.603087   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.603243   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:26.603414   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:26.603567   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:26.603705   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:26.603848   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:26.603997   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:26.604010   22893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 17:54:26.889612   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 17:54:26.889643   22893 main.go:141] libmachine: Checking connection to Docker...
	I0229 17:54:26.889654   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetURL
	I0229 17:54:26.891038   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Using libvirt version 6000000
	I0229 17:54:26.892900   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.893170   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.893197   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.893338   22893 main.go:141] libmachine: Docker is up and running!
	I0229 17:54:26.893352   22893 main.go:141] libmachine: Reticulating splines...
	I0229 17:54:26.893359   22893 client.go:171] LocalClient.Create took 24.137523616s
	I0229 17:54:26.893385   22893 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-779504" took 24.137591881s
	I0229 17:54:26.893414   22893 start.go:300] post-start starting for "ingress-addon-legacy-779504" (driver="kvm2")
	I0229 17:54:26.893432   22893 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:54:26.893455   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:26.893689   22893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:54:26.893709   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:26.895550   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.895858   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:26.895886   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:26.895986   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:26.896165   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:26.896320   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:26.896456   22893 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:54:26.983489   22893 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:54:26.988105   22893 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 17:54:26.988125   22893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 17:54:26.988197   22893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 17:54:26.988280   22893 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 17:54:26.988296   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /etc/ssl/certs/136512.pem
	I0229 17:54:26.988398   22893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 17:54:26.999340   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 17:54:27.024675   22893 start.go:303] post-start completed in 131.24504ms
	I0229 17:54:27.024719   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetConfigRaw
	I0229 17:54:27.025221   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetIP
	I0229 17:54:27.027653   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.027971   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:27.028004   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.028234   22893 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/config.json ...
	I0229 17:54:27.028411   22893 start.go:128] duration metric: createHost completed in 24.292328728s
	I0229 17:54:27.028433   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:27.030703   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.031053   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:27.031085   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.031220   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:27.031394   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:27.031536   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:27.031685   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:27.031888   22893 main.go:141] libmachine: Using SSH client type: native
	I0229 17:54:27.032040   22893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 17:54:27.032049   22893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 17:54:27.144663   22893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709229267.115383420
	
	I0229 17:54:27.144686   22893 fix.go:206] guest clock: 1709229267.115383420
	I0229 17:54:27.144695   22893 fix.go:219] Guest: 2024-02-29 17:54:27.11538342 +0000 UTC Remote: 2024-02-29 17:54:27.028422755 +0000 UTC m=+42.249997771 (delta=86.960665ms)
	I0229 17:54:27.144719   22893 fix.go:190] guest clock delta is within tolerance: 86.960665ms
	I0229 17:54:27.144726   22893 start.go:83] releasing machines lock for "ingress-addon-legacy-779504", held for 24.408749284s
	I0229 17:54:27.144760   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:27.145008   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetIP
	I0229 17:54:27.147610   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.147933   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:27.147971   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.148106   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:27.148595   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:27.148749   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:54:27.148826   22893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:54:27.148862   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:27.148964   22893 ssh_runner.go:195] Run: cat /version.json
	I0229 17:54:27.148987   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:54:27.151361   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.151673   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.151806   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:27.151833   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.151964   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:27.152073   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:27.152103   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:27.152161   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:27.152249   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:54:27.152320   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:27.152394   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:54:27.152483   22893 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:54:27.152540   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:54:27.152677   22893 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:54:27.240335   22893 ssh_runner.go:195] Run: systemctl --version
	I0229 17:54:27.260239   22893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 17:54:27.429788   22893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 17:54:27.436612   22893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 17:54:27.436685   22893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 17:54:27.453974   22893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
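The find/mv step above renames any bridge or podman CNI configs with a .mk_disabled suffix so CRI-O will not load them; in this run only 87-podman-bridge.conflist was affected. A follow-up check (hypothetical, not run by minikube here):

	ls -1 /etc/cni/net.d/*.mk_disabled
	# expected: /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled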
	I0229 17:54:27.454010   22893 start.go:475] detecting cgroup driver to use...
	I0229 17:54:27.454075   22893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 17:54:27.470970   22893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:54:27.485287   22893 docker.go:217] disabling cri-docker service (if available) ...
	I0229 17:54:27.485350   22893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 17:54:27.499656   22893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 17:54:27.514259   22893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 17:54:27.634224   22893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 17:54:27.784276   22893 docker.go:233] disabling docker service ...
	I0229 17:54:27.784362   22893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 17:54:27.801316   22893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 17:54:27.816683   22893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 17:54:27.960294   22893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 17:54:28.087859   22893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 17:54:28.103411   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:54:28.124248   22893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0229 17:54:28.124305   22893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:54:28.135690   22893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 17:54:28.135744   22893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:54:28.146513   22893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 17:54:28.157088   22893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
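The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and set conmon_cgroup to "pod" in the drop-in config. A quick way to confirm the result (a sketch; the exact TOML layout depends on the ISO's stock 02-crio.conf):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"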
	I0229 17:54:28.170107   22893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:54:28.181833   22893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:54:28.191378   22893 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 17:54:28.191426   22893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 17:54:28.205264   22893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
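Since the bridge-netfilter sysctl was missing, minikube loads br_netfilter and enables IPv4 forwarding before reloading systemd and restarting CRI-O. Verifying the kernel side by hand would look roughly like this (illustrative commands, not part of the log):

	lsmod | grep br_netfilter                    # module now loaded
	sysctl net.bridge.bridge-nf-call-iptables    # should exist and typically default to 1
	cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above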
	I0229 17:54:28.215512   22893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:54:28.341150   22893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 17:54:28.488547   22893 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 17:54:28.488628   22893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 17:54:28.494640   22893 start.go:543] Will wait 60s for crictl version
	I0229 17:54:28.494734   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:28.498863   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 17:54:28.543089   22893 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 17:54:28.543159   22893 ssh_runner.go:195] Run: crio --version
	I0229 17:54:28.572568   22893 ssh_runner.go:195] Run: crio --version
	I0229 17:54:28.606070   22893 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.29.1 ...
	I0229 17:54:28.607203   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetIP
	I0229 17:54:28.609747   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:28.610071   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:54:28.610112   22893 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:54:28.610284   22893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 17:54:28.614909   22893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:54:28.629760   22893 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 17:54:28.629809   22893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:54:28.667625   22893 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 17:54:28.667686   22893 ssh_runner.go:195] Run: which lz4
	I0229 17:54:28.672080   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 17:54:28.672191   22893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 17:54:28.676805   22893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 17:54:28.676838   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0229 17:54:30.511601   22893 crio.go:444] Took 1.839424 seconds to copy over tarball
	I0229 17:54:30.511655   22893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 17:54:33.466821   22893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955137304s)
	I0229 17:54:33.466850   22893 crio.go:451] Took 2.955226 seconds to extract the tarball
	I0229 17:54:33.466861   22893 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 17:54:33.513936   22893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:54:33.559616   22893 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 17:54:33.559640   22893 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 17:54:33.559687   22893 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:54:33.559714   22893 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:54:33.559761   22893 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:54:33.559779   22893 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:54:33.559790   22893 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:54:33.559941   22893 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 17:54:33.559765   22893 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:54:33.559767   22893 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 17:54:33.561207   22893 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:54:33.561265   22893 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:54:33.561277   22893 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:54:33.561287   22893 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 17:54:33.561340   22893 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 17:54:33.561440   22893 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:54:33.561462   22893 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:54:33.561501   22893 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:54:33.769016   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 17:54:33.785474   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:54:33.822440   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 17:54:33.823429   22893 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 17:54:33.823465   22893 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0229 17:54:33.823504   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:33.859009   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 17:54:33.870642   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:54:33.871731   22893 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 17:54:33.871748   22893 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 17:54:33.871770   22893 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 17:54:33.871779   22893 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:54:33.871807   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:33.871812   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:33.871812   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0229 17:54:33.876547   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:54:33.882483   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:54:33.974118   22893 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 17:54:33.974164   22893 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:54:33.974210   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:33.984769   22893 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 17:54:33.984818   22893 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:54:33.984852   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:33.997874   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 17:54:33.997931   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0229 17:54:33.998001   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:54:34.031386   22893 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 17:54:34.031436   22893 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:54:34.031490   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:34.046027   22893 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 17:54:34.046071   22893 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:54:34.046101   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0229 17:54:34.046115   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:54:34.046110   22893 ssh_runner.go:195] Run: which crictl
	I0229 17:54:34.089418   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 17:54:34.091286   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:54:34.091395   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 17:54:34.146904   22893 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:54:34.146920   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 17:54:34.147168   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 17:54:34.161822   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 17:54:34.192559   22893 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 17:54:34.481693   22893 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:54:34.628889   22893 cache_images.go:92] LoadImages completed in 1.06923123s
	W0229 17:54:34.629002   22893 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0229 17:54:34.629080   22893 ssh_runner.go:195] Run: crio config
	I0229 17:54:34.677553   22893 cni.go:84] Creating CNI manager for ""
	I0229 17:54:34.677573   22893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:54:34.677593   22893 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:54:34.677615   22893 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-779504 NodeName:ingress-addon-legacy-779504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 17:54:34.677757   22893 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-779504"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 17:54:34.677849   22893 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-779504 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
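The [Unit]/[Service] drop-in shown above is written out to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, alongside a generated kubelet.service. To inspect the effective unit on the guest after the later daemon-reload, something like the following would work (a sketch, not output captured in this run):

	systemctl cat kubelet               # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet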
	I0229 17:54:34.677916   22893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 17:54:34.688594   22893 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:54:34.688662   22893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:54:34.698899   22893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0229 17:54:34.717809   22893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 17:54:34.736454   22893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0229 17:54:34.755095   22893 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0229 17:54:34.759433   22893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:54:34.773271   22893 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504 for IP: 192.168.39.104
	I0229 17:54:34.773310   22893 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:34.773459   22893 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 17:54:34.773502   22893 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 17:54:34.773549   22893 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.key
	I0229 17:54:34.773560   22893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.crt with IP's: []
	I0229 17:54:34.986118   22893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.crt ...
	I0229 17:54:34.986142   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.crt: {Name:mkd1dc16aa0ec33323f6f39670c1ae09d8ebcf10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:34.986299   22893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.key ...
	I0229 17:54:34.986311   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/client.key: {Name:mk62231a2a702e0f8260a0e6ef981293e8c68989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:34.986377   22893 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key.a10f9b59
	I0229 17:54:34.986415   22893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt.a10f9b59 with IP's: [192.168.39.104 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 17:54:35.140547   22893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt.a10f9b59 ...
	I0229 17:54:35.140575   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt.a10f9b59: {Name:mk9c68661c50c43060a422f0eed57de070ae0f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:35.140726   22893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key.a10f9b59 ...
	I0229 17:54:35.140739   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key.a10f9b59: {Name:mk17511889fde2eb8ba855c3a5f4346dedd2156a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:35.140803   22893 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt.a10f9b59 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt
	I0229 17:54:35.140893   22893 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key.a10f9b59 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key
	I0229 17:54:35.140949   22893 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.key
	I0229 17:54:35.140962   22893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.crt with IP's: []
	I0229 17:54:35.394153   22893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.crt ...
	I0229 17:54:35.394184   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.crt: {Name:mk2b53b4ba43e57a884192ce16033a41ded33e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:35.394337   22893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.key ...
	I0229 17:54:35.394350   22893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.key: {Name:mk348d6d0d03942c780f7686e5243c10e92cad33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:54:35.394417   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 17:54:35.394434   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 17:54:35.394444   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 17:54:35.394454   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 17:54:35.394466   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 17:54:35.394476   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 17:54:35.394492   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 17:54:35.394502   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 17:54:35.394557   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 17:54:35.394595   22893 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 17:54:35.394606   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 17:54:35.394633   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 17:54:35.394655   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 17:54:35.394682   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 17:54:35.394727   22893 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 17:54:35.394759   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:54:35.394777   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem -> /usr/share/ca-certificates/13651.pem
	I0229 17:54:35.394790   22893 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /usr/share/ca-certificates/136512.pem
	I0229 17:54:35.395361   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:54:35.424072   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 17:54:35.450898   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:54:35.478314   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/ingress-addon-legacy-779504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 17:54:35.506159   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:54:35.532112   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 17:54:35.557916   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:54:35.584131   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 17:54:35.610693   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:54:35.636572   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 17:54:35.661845   22893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 17:54:35.688205   22893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:54:35.706981   22893 ssh_runner.go:195] Run: openssl version
	I0229 17:54:35.713245   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:54:35.725117   22893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:54:35.730221   22893 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:54:35.730288   22893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:54:35.736484   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 17:54:35.747796   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 17:54:35.758949   22893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 17:54:35.763764   22893 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 17:54:35.763803   22893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 17:54:35.769627   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 17:54:35.780842   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 17:54:35.792286   22893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 17:54:35.797450   22893 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 17:54:35.797497   22893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 17:54:35.803860   22893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
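Each of the three certificates above is installed the same way: link it under /usr/share/ca-certificates into /etc/ssl/certs, compute its OpenSSL subject hash, and add a <hash>.0 symlink so the system trust store resolves it (minikubeCA.pem -> b5213941.0, 13651.pem -> 51391683.0, 136512.pem -> 3ec20f2e.0). The general pattern, as a sketch with a placeholder cert.pem:

	sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${hash}.0"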
	I0229 17:54:35.815800   22893 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:54:35.820501   22893 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 17:54:35.820550   22893 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-779504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:54:35.820615   22893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 17:54:35.820656   22893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:54:35.858858   22893 cri.go:89] found id: ""
	I0229 17:54:35.858919   22893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:54:35.869497   22893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 17:54:35.879591   22893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:54:35.891593   22893 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:54:35.891633   22893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:54:35.952011   22893 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:54:35.952176   22893 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:54:36.090055   22893 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:54:36.090180   22893 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:54:36.090264   22893 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:54:36.309267   22893 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:54:36.310168   22893 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:54:36.310216   22893 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:54:36.438820   22893 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:54:36.441030   22893 out.go:204]   - Generating certificates and keys ...
	I0229 17:54:36.441119   22893 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:54:36.441193   22893 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:54:36.624822   22893 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 17:54:36.730607   22893 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 17:54:37.117899   22893 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 17:54:37.275451   22893 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 17:54:37.330724   22893 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 17:54:37.331149   22893 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0229 17:54:37.546817   22893 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 17:54:37.547010   22893 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I0229 17:54:37.667644   22893 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 17:54:37.816847   22893 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 17:54:37.937153   22893 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 17:54:37.937429   22893 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:54:38.102944   22893 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:54:38.161909   22893 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:54:38.412114   22893 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:54:38.509521   22893 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:54:38.510236   22893 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:54:38.512382   22893 out.go:204]   - Booting up control plane ...
	I0229 17:54:38.512504   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:54:38.517279   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:54:38.518497   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:54:38.520872   22893 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:54:38.528783   22893 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:55:18.519521   22893 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:55:18.519601   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:18.519829   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:23.520610   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:23.520823   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:33.520136   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:33.520351   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:53.520027   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:53.520214   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:56:33.522185   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:56:33.522436   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:56:33.522450   22893 kubeadm.go:322] 
	I0229 17:56:33.522481   22893 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:56:33.522515   22893 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:56:33.522520   22893 kubeadm.go:322] 
	I0229 17:56:33.522585   22893 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:56:33.522653   22893 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:56:33.522823   22893 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:56:33.522835   22893 kubeadm.go:322] 
	I0229 17:56:33.522986   22893 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:56:33.523061   22893 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:56:33.523116   22893 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:56:33.523127   22893 kubeadm.go:322] 
	I0229 17:56:33.523343   22893 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:56:33.523472   22893 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:56:33.523486   22893 kubeadm.go:322] 
	I0229 17:56:33.523633   22893 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 17:56:33.523738   22893 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0229 17:56:33.523836   22893 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:56:33.523945   22893 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0229 17:56:33.523965   22893 kubeadm.go:322] 
	I0229 17:56:33.525336   22893 kubeadm.go:322] W0229 17:54:35.934483     923 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:56:33.525480   22893 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:56:33.525665   22893 kubeadm.go:322] W0229 17:54:38.502194     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:56:33.525859   22893 kubeadm.go:322] W0229 17:54:38.503564     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:56:33.525980   22893 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:56:33.526071   22893 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
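	For reference, the kubelet health probe and the troubleshooting commands that kubeadm suggests above can be reproduced by hand from inside the node, reached with minikube ssh for this profile; a minimal sketch, assuming the CRI-O socket path shown in the log:

	minikube ssh -p ingress-addon-legacy-779504                    # profile name taken from this run; then, inside the VM:
	curl -sSL http://localhost:10248/healthz                       # the endpoint kubeadm's kubelet-check polls
	sudo systemctl status kubelet                                  # is the kubelet unit active?
	sudo journalctl -xeu kubelet --no-pager | tail -n 100          # recent kubelet logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause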
	W0229 17:56:33.526236   22893 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:35.934483     923 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:54:38.502194     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:54:38.503564     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779504 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:35.934483     923 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:54:38.502194     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:54:38.503564     923 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 17:56:33.526323   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 17:56:33.984718   22893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:56:34.001646   22893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:56:34.013343   22893 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
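	After the first init attempt times out, minikube tears down the partial control plane and retries; the sequence visible in the surrounding log lines amounts, roughly, to the following (same binary and config paths as above):

	sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf    # stale-config check; it fails here because reset already removed the files
	sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml ...    # same --ignore-preflight-errors flag set as the first attempt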
	I0229 17:56:34.013388   22893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:56:34.066571   22893 kubeadm.go:322] W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:56:34.208862   22893 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:56:35.158466   22893 kubeadm.go:322] W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:56:35.159648   22893 kubeadm.go:322] W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:58:30.166152   22893 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:58:30.166266   22893 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 17:58:30.167635   22893 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:58:30.167681   22893 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:58:30.167748   22893 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:58:30.167828   22893 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:58:30.167906   22893 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:58:30.167993   22893 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:58:30.168069   22893 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:58:30.168103   22893 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:58:30.168223   22893 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:58:30.170168   22893 out.go:204]   - Generating certificates and keys ...
	I0229 17:58:30.170262   22893 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:58:30.170341   22893 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:58:30.170453   22893 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 17:58:30.170532   22893 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 17:58:30.170613   22893 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 17:58:30.170706   22893 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 17:58:30.170787   22893 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 17:58:30.170873   22893 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 17:58:30.170990   22893 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 17:58:30.171107   22893 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 17:58:30.171173   22893 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 17:58:30.171260   22893 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:58:30.171345   22893 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:58:30.171404   22893 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:58:30.171496   22893 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:58:30.171586   22893 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:58:30.171659   22893 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:58:30.173290   22893 out.go:204]   - Booting up control plane ...
	I0229 17:58:30.173377   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:58:30.173444   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:58:30.173511   22893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:58:30.173597   22893 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:58:30.173782   22893 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:58:30.173836   22893 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:58:30.173908   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:58:30.174084   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:58:30.174184   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:58:30.174423   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:58:30.174517   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:58:30.174791   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:58:30.174898   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:58:30.175174   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:58:30.175272   22893 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:58:30.175436   22893 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:58:30.175445   22893 kubeadm.go:322] 
	I0229 17:58:30.175481   22893 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:58:30.175516   22893 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:58:30.175522   22893 kubeadm.go:322] 
	I0229 17:58:30.175565   22893 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:58:30.175607   22893 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:58:30.175716   22893 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:58:30.175728   22893 kubeadm.go:322] 
	I0229 17:58:30.175864   22893 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:58:30.175912   22893 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:58:30.175965   22893 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:58:30.175975   22893 kubeadm.go:322] 
	I0229 17:58:30.176120   22893 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:58:30.176238   22893 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:58:30.176253   22893 kubeadm.go:322] 
	I0229 17:58:30.176429   22893 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 17:58:30.176517   22893 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0229 17:58:30.176637   22893 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:58:30.176711   22893 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0229 17:58:30.176752   22893 kubeadm.go:322] 
	I0229 17:58:30.176789   22893 kubeadm.go:406] StartCluster complete in 3m54.356240596s
	I0229 17:58:30.176836   22893 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 17:58:30.176890   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 17:58:30.231100   22893 cri.go:89] found id: ""
	I0229 17:58:30.231123   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.231134   22893 logs.go:278] No container was found matching "kube-apiserver"
	I0229 17:58:30.231142   22893 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 17:58:30.231196   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 17:58:30.301935   22893 cri.go:89] found id: ""
	I0229 17:58:30.301966   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.301976   22893 logs.go:278] No container was found matching "etcd"
	I0229 17:58:30.301985   22893 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 17:58:30.302044   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 17:58:30.362074   22893 cri.go:89] found id: ""
	I0229 17:58:30.362100   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.362111   22893 logs.go:278] No container was found matching "coredns"
	I0229 17:58:30.362118   22893 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 17:58:30.362179   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 17:58:30.401139   22893 cri.go:89] found id: ""
	I0229 17:58:30.401165   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.401185   22893 logs.go:278] No container was found matching "kube-scheduler"
	I0229 17:58:30.401192   22893 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 17:58:30.401252   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 17:58:30.439576   22893 cri.go:89] found id: ""
	I0229 17:58:30.439599   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.439609   22893 logs.go:278] No container was found matching "kube-proxy"
	I0229 17:58:30.439616   22893 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 17:58:30.439679   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 17:58:30.475720   22893 cri.go:89] found id: ""
	I0229 17:58:30.475746   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.475753   22893 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 17:58:30.475759   22893 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 17:58:30.475808   22893 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 17:58:30.517338   22893 cri.go:89] found id: ""
	I0229 17:58:30.517365   22893 logs.go:276] 0 containers: []
	W0229 17:58:30.517376   22893 logs.go:278] No container was found matching "kindnet"
	I0229 17:58:30.517386   22893 logs.go:123] Gathering logs for CRI-O ...
	I0229 17:58:30.517401   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 17:58:30.612410   22893 logs.go:123] Gathering logs for container status ...
	I0229 17:58:30.612445   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 17:58:30.663715   22893 logs.go:123] Gathering logs for kubelet ...
	I0229 17:58:30.663746   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 17:58:30.725666   22893 logs.go:123] Gathering logs for dmesg ...
	I0229 17:58:30.725699   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 17:58:30.740808   22893 logs.go:123] Gathering logs for describe nodes ...
	I0229 17:58:30.740837   22893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 17:58:30.811476   22893 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
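	When the retry also fails, minikube collects diagnostics before exiting; the same information can be gathered manually with commands equivalent to the ones logged above (a sketch, assuming the kubeconfig and binary paths from this run):

	sudo journalctl -u crio -n 400 --no-pager                       # CRI-O runtime logs
	sudo crictl ps -a                                               # container status, all states
	sudo journalctl -u kubelet -n 400 --no-pager                    # kubelet logs
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400      # kernel warnings and errors
	sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# the last command fails in this run: the API server on localhost:8443 never became reachable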
	W0229 17:58:30.811553   22893 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 17:58:30.811582   22893 out.go:239] * 
	W0229 17:58:30.811646   22893 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:58:30.811671   22893 out.go:239] * 
	W0229 17:58:30.812529   22893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
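	If this failure is reproduced outside CI, the box above describes the escalation path; the log bundle it asks for can be captured for this profile with, roughly:

	minikube logs --file=logs.txt -p ingress-addon-legacy-779504    # then attach logs.txt to the GitHub issue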
	I0229 17:58:30.815737   22893 out.go:177] 
	W0229 17:58:30.817232   22893 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:56:34.062165    2400 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:56:35.154286    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:56:35.155552    2400 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:58:30.817311   22893 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 17:58:30.817354   22893 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 17:58:30.818946   22893 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-779504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (286.10s)
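Note on the failure above: the K8S_KUBELET_NOT_RUNNING exit means kubeadm init gave up waiting for the kubelet on the v1.18.20 node. A minimal triage sketch, built only from commands this log itself suggests (the profile name, the --extra-config=kubelet.cgroup-driver=systemd hint, and the crictl endpoint are taken from the output above; wrapping them in minikube ssh and the exact ordering are assumptions, not part of the test):

	# inspect the kubelet inside the VM for this profile
	minikube -p ingress-addon-legacy-779504 ssh "systemctl status kubelet; sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# list any control-plane containers cri-o managed to start
	minikube -p ingress-addon-legacy-779504 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p ingress-addon-legacy-779504 --kubernetes-version=v1.18.20 --memory=4096 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

If the journalctl output shows a cgroup-driver mismatch between the kubelet and cri-o, the extra-config retry is the change the suggestion in the log is pointing at.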

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.07s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779504 addons enable ingress --alsologtostderr -v=5
E0229 17:59:08.588479   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-779504 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m36.832697283s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:58:30.930216   23804 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:58:30.930389   23804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:58:30.930399   23804 out.go:304] Setting ErrFile to fd 2...
	I0229 17:58:30.930403   23804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:58:30.930595   23804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:58:30.930881   23804 mustload.go:65] Loading cluster: ingress-addon-legacy-779504
	I0229 17:58:30.932085   23804 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 17:58:30.932202   23804 addons.go:597] checking whether the cluster is paused
	I0229 17:58:30.932439   23804 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 17:58:30.932459   23804 host.go:66] Checking if "ingress-addon-legacy-779504" exists ...
	I0229 17:58:30.932886   23804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:58:30.932924   23804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:58:30.947548   23804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0229 17:58:30.947950   23804 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:58:30.948454   23804 main.go:141] libmachine: Using API Version  1
	I0229 17:58:30.948473   23804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:58:30.948871   23804 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:58:30.949064   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetState
	I0229 17:58:30.950638   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:58:30.950853   23804 ssh_runner.go:195] Run: systemctl --version
	I0229 17:58:30.950876   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:58:30.953011   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:58:30.953439   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:58:30.953468   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:58:30.953579   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:58:30.953740   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:58:30.953888   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:58:30.953999   23804 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:58:31.038039   23804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 17:58:31.038125   23804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:58:31.077012   23804 cri.go:89] found id: ""
	I0229 17:58:31.077082   23804 main.go:141] libmachine: Making call to close driver server
	I0229 17:58:31.077100   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 17:58:31.077493   23804 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:58:31.077517   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Closing plugin on server side
	I0229 17:58:31.077522   23804 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:58:31.079910   23804 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 17:58:31.081332   23804 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 17:58:31.081347   23804 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-779504"
	I0229 17:58:31.081354   23804 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-779504"
	I0229 17:58:31.081392   23804 host.go:66] Checking if "ingress-addon-legacy-779504" exists ...
	I0229 17:58:31.081681   23804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:58:31.081720   23804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:58:31.095534   23804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0229 17:58:31.095968   23804 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:58:31.096440   23804 main.go:141] libmachine: Using API Version  1
	I0229 17:58:31.096463   23804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:58:31.096850   23804 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:58:31.097449   23804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:58:31.097494   23804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:58:31.111585   23804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0229 17:58:31.111957   23804 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:58:31.112454   23804 main.go:141] libmachine: Using API Version  1
	I0229 17:58:31.112473   23804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:58:31.112872   23804 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:58:31.113070   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetState
	I0229 17:58:31.114590   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 17:58:31.116496   23804 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 17:58:31.117963   23804 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:58:31.119355   23804 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:58:31.120997   23804 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:58:31.121016   23804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 17:58:31.121032   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 17:58:31.124050   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:58:31.124555   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 17:58:31.124583   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 17:58:31.124735   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 17:58:31.124932   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 17:58:31.125055   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 17:58:31.125185   23804 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 17:58:31.218901   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:31.302548   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:31.302580   23804 retry.go:31] will retry after 249.169073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:31.552041   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:31.617798   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:31.617826   23804 retry.go:31] will retry after 561.66968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:32.180693   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:32.249959   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:32.249995   23804 retry.go:31] will retry after 797.932748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:33.049057   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:33.121673   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:33.121702   23804 retry.go:31] will retry after 932.449033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:34.054872   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:34.124131   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:34.124157   23804 retry.go:31] will retry after 1.418175598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:35.543761   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:35.623420   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:35.623452   23804 retry.go:31] will retry after 2.116203252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:37.741048   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:37.824256   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:37.824295   23804 retry.go:31] will retry after 4.031960819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:41.856957   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:41.920954   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:41.921007   23804 retry.go:31] will retry after 5.669513842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:47.592005   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:47.660933   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:47.660980   23804 retry.go:31] will retry after 6.186869009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:53.850542   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:53.919812   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:53.919854   23804 retry.go:31] will retry after 13.096378233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:07.017248   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:59:07.097632   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:07.097671   23804 retry.go:31] will retry after 13.702014556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:20.804488   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:59:20.872293   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:20.872339   23804 retry.go:31] will retry after 24.493859186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:45.369832   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:59:45.441304   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:45.441332   23804 retry.go:31] will retry after 22.180913675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:07.623861   23804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:00:07.693784   23804 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:07.693892   23804 main.go:141] libmachine: Making call to close driver server
	I0229 18:00:07.693909   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 18:00:07.694182   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Closing plugin on server side
	I0229 18:00:07.694188   23804 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:00:07.694214   23804 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:00:07.694225   23804 main.go:141] libmachine: Making call to close driver server
	I0229 18:00:07.694236   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 18:00:07.694460   23804 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Closing plugin on server side
	I0229 18:00:07.694480   23804 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:00:07.694493   23804 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:00:07.694517   23804 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-779504"
	I0229 18:00:07.697136   23804 out.go:177] * Verifying ingress addon...
	I0229 18:00:07.699697   23804 out.go:177] 
	W0229 18:00:07.701073   23804 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-779504" does not exist: client config: context "ingress-addon-legacy-779504" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-779504" does not exist: client config: context "ingress-addon-legacy-779504" does not exist]
	W0229 18:00:07.701094   23804 out.go:239] * 
	* 
	W0229 18:00:07.702942   23804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:00:07.704402   23804 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504: exit status 6 (233.350713ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:00:07.925666   24055 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779504" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779504" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.07s)
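Note on the failure above: every kubectl apply in the retry loop failed with "connection refused" on localhost:8443, i.e. the addon could not be installed because the apiserver from the earlier FirstStart failure never came up, and the kubeconfig no longer carries the profile's context. A short sketch of the precondition check one could run before re-enabling the addon, using only commands that already appear in this report (the ordering is an assumption):

	# confirm the control plane is actually running for this profile
	out/minikube-linux-amd64 status -p ingress-addon-legacy-779504
	# repair the stale kubectl context the status warning points at
	out/minikube-linux-amd64 update-context -p ingress-addon-legacy-779504
	# then retry the addon enable
	out/minikube-linux-amd64 -p ingress-addon-legacy-779504 addons enable ingress --alsologtostderr -v=5

Until the FirstStart issue is fixed, the status check will keep reporting a stopped apiserver and the addon enable will keep failing the same way, so the two failures should be read as one root cause.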

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (88.86s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779504 addons enable ingress-dns --alsologtostderr -v=5
E0229 18:00:30.511498   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-779504 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m28.613794278s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:00:07.991132   24085 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:00:07.991555   24085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:00:07.991571   24085 out.go:304] Setting ErrFile to fd 2...
	I0229 18:00:07.991578   24085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:00:07.992093   24085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:00:07.992785   24085 mustload.go:65] Loading cluster: ingress-addon-legacy-779504
	I0229 18:00:07.993159   24085 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 18:00:07.993180   24085 addons.go:597] checking whether the cluster is paused
	I0229 18:00:07.993261   24085 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 18:00:07.993274   24085 host.go:66] Checking if "ingress-addon-legacy-779504" exists ...
	I0229 18:00:07.993598   24085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:00:07.993640   24085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:00:08.008453   24085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0229 18:00:08.008869   24085 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:00:08.009466   24085 main.go:141] libmachine: Using API Version  1
	I0229 18:00:08.009488   24085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:00:08.009848   24085 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:00:08.010059   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetState
	I0229 18:00:08.011643   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 18:00:08.011848   24085 ssh_runner.go:195] Run: systemctl --version
	I0229 18:00:08.011869   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 18:00:08.013714   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 18:00:08.014078   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 18:00:08.014100   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 18:00:08.014215   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 18:00:08.014423   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 18:00:08.014540   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 18:00:08.014672   24085 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 18:00:08.097940   24085 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:00:08.098032   24085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:00:08.141473   24085 cri.go:89] found id: ""
	I0229 18:00:08.141517   24085 main.go:141] libmachine: Making call to close driver server
	I0229 18:00:08.141526   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 18:00:08.141819   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Closing plugin on server side
	I0229 18:00:08.141865   24085 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:00:08.141878   24085 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:00:08.144363   24085 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 18:00:08.146039   24085 config.go:182] Loaded profile config "ingress-addon-legacy-779504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 18:00:08.146056   24085 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-779504"
	I0229 18:00:08.146066   24085 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-779504"
	I0229 18:00:08.146102   24085 host.go:66] Checking if "ingress-addon-legacy-779504" exists ...
	I0229 18:00:08.146357   24085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:00:08.146391   24085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:00:08.160672   24085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0229 18:00:08.161095   24085 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:00:08.161574   24085 main.go:141] libmachine: Using API Version  1
	I0229 18:00:08.161604   24085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:00:08.161969   24085 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:00:08.162433   24085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:00:08.162507   24085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:00:08.176473   24085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0229 18:00:08.176900   24085 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:00:08.177346   24085 main.go:141] libmachine: Using API Version  1
	I0229 18:00:08.177369   24085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:00:08.177653   24085 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:00:08.177811   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetState
	I0229 18:00:08.179244   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .DriverName
	I0229 18:00:08.182018   24085 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 18:00:08.183520   24085 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 18:00:08.183536   24085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 18:00:08.183549   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHHostname
	I0229 18:00:08.186179   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 18:00:08.186576   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:79:48", ip: ""} in network mk-ingress-addon-legacy-779504: {Iface:virbr1 ExpiryTime:2024-02-29 18:54:18 +0000 UTC Type:0 Mac:52:54:00:1b:79:48 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ingress-addon-legacy-779504 Clientid:01:52:54:00:1b:79:48}
	I0229 18:00:08.186621   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | domain ingress-addon-legacy-779504 has defined IP address 192.168.39.104 and MAC address 52:54:00:1b:79:48 in network mk-ingress-addon-legacy-779504
	I0229 18:00:08.186715   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHPort
	I0229 18:00:08.186888   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHKeyPath
	I0229 18:00:08.187045   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .GetSSHUsername
	I0229 18:00:08.187174   24085 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/ingress-addon-legacy-779504/id_rsa Username:docker}
	I0229 18:00:08.282709   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:08.349196   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:08.349224   24085 retry.go:31] will retry after 157.683479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:08.507683   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:08.593293   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:08.593338   24085 retry.go:31] will retry after 212.065922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:08.805756   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:08.872322   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:08.872364   24085 retry.go:31] will retry after 499.244653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:09.372083   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:09.436074   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:09.436104   24085 retry.go:31] will retry after 473.510714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:09.909799   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:09.974265   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:09.974296   24085 retry.go:31] will retry after 1.278076635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:11.253842   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:11.317452   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:11.317486   24085 retry.go:31] will retry after 1.137642496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:12.455815   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:12.523435   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:12.523470   24085 retry.go:31] will retry after 1.434230348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:13.958318   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:14.034610   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:14.034639   24085 retry.go:31] will retry after 4.640825616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:18.675986   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:18.745362   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:18.745405   24085 retry.go:31] will retry after 3.310331012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:22.058918   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:22.134953   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:22.134982   24085 retry.go:31] will retry after 9.236509412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:31.374597   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:31.440998   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:31.441040   24085 retry.go:31] will retry after 9.149162553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:40.592296   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:40.655476   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:40.655504   24085 retry.go:31] will retry after 12.077994119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:52.737680   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:52.839304   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:52.839342   24085 retry.go:31] will retry after 43.606379964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:01:36.446339   24085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:01:36.545112   24085 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:01:36.545163   24085 main.go:141] libmachine: Making call to close driver server
	I0229 18:01:36.545186   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 18:01:36.545468   24085 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:01:36.545488   24085 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:01:36.545498   24085 main.go:141] libmachine: Making call to close driver server
	I0229 18:01:36.545507   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) Calling .Close
	I0229 18:01:36.545794   24085 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:01:36.545822   24085 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:01:36.545793   24085 main.go:141] libmachine: (ingress-addon-legacy-779504) DBG | Closing plugin on server side
	I0229 18:01:36.548351   24085 out.go:177] 
	W0229 18:01:36.549976   24085 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 18:01:36.550001   24085 out.go:239] * 
	* 
	W0229 18:01:36.551908   24085 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:01:36.553173   24085 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504: exit status 6 (243.70519ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:01:36.784621   24312 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779504" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779504" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (88.86s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-779504 -n ingress-addon-legacy-779504: exit status 6 (232.247794ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:01:37.015728   24342 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-779504" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-779504" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (690.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051105
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-051105
E0229 18:12:43.785468   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:12:46.665466   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-051105: exit status 82 (2m0.262661127s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-051105"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-051105" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051105 --wait=true -v=8 --alsologtostderr
E0229 18:14:09.713879   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:17:43.786036   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:17:46.662975   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051105 --wait=true -v=8 --alsologtostderr: (9m27.228056251s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-051105 -n multinode-051105
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-051105 logs -n 25: (1.63396667s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2896101559/001/cp-test_multinode-051105-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105:/home/docker/cp-test_multinode-051105-m02_multinode-051105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n multinode-051105 sudo cat                                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /home/docker/cp-test_multinode-051105-m02_multinode-051105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03:/home/docker/cp-test_multinode-051105-m02_multinode-051105-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n multinode-051105-m03 sudo cat                                   | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /home/docker/cp-test_multinode-051105-m02_multinode-051105-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp testdata/cp-test.txt                                                | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2896101559/001/cp-test_multinode-051105-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105:/home/docker/cp-test_multinode-051105-m03_multinode-051105.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n multinode-051105 sudo cat                                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /home/docker/cp-test_multinode-051105-m03_multinode-051105.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt                       | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m02:/home/docker/cp-test_multinode-051105-m03_multinode-051105-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n                                                                 | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | multinode-051105-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051105 ssh -n multinode-051105-m02 sudo cat                                   | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | /home/docker/cp-test_multinode-051105-m03_multinode-051105-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-051105 node stop m03                                                          | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	| node    | multinode-051105 node start                                                             | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC | 29 Feb 24 18:10 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-051105                                                                | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC |                     |
	| stop    | -p multinode-051105                                                                     | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:10 UTC |                     |
	| start   | -p multinode-051105                                                                     | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:12 UTC | 29 Feb 24 18:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-051105                                                                | multinode-051105 | jenkins | v1.32.0 | 29 Feb 24 18:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:12:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:12:48.123938   30631 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:12:48.124088   30631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:12:48.124098   30631 out.go:304] Setting ErrFile to fd 2...
	I0229 18:12:48.124102   30631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:12:48.124304   30631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:12:48.124882   30631 out.go:298] Setting JSON to false
	I0229 18:12:48.125792   30631 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3312,"bootTime":1709227056,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:12:48.125862   30631 start.go:139] virtualization: kvm guest
	I0229 18:12:48.128261   30631 out.go:177] * [multinode-051105] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:12:48.129815   30631 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:12:48.131214   30631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:12:48.129819   30631 notify.go:220] Checking for updates...
	I0229 18:12:48.132481   30631 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:12:48.133795   30631 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:12:48.135051   30631 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:12:48.136316   30631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:12:48.137970   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:12:48.138055   30631 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:12:48.138475   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:12:48.138519   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:12:48.152923   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38285
	I0229 18:12:48.153359   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:12:48.153899   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:12:48.153917   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:12:48.154181   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:12:48.154304   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:12:48.188077   30631 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:12:48.189457   30631 start.go:299] selected driver: kvm2
	I0229 18:12:48.189472   30631 start.go:903] validating driver "kvm2" against &{Name:multinode-051105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:12:48.189620   30631 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:12:48.189934   30631 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:12:48.190036   30631 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:12:48.203757   30631 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:12:48.204395   30631 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:12:48.204470   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:12:48.204483   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:12:48.204498   30631 start_flags.go:323] config:
	{Name:multinode-051105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:12:48.204736   30631 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:12:48.207111   30631 out.go:177] * Starting control plane node multinode-051105 in cluster multinode-051105
	I0229 18:12:48.208233   30631 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:12:48.208257   30631 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:12:48.208264   30631 cache.go:56] Caching tarball of preloaded images
	I0229 18:12:48.208332   30631 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:12:48.208343   30631 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:12:48.208463   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:12:48.208671   30631 start.go:365] acquiring machines lock for multinode-051105: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:12:48.208712   30631 start.go:369] acquired machines lock for "multinode-051105" in 24.527µs
	I0229 18:12:48.208731   30631 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:12:48.208741   30631 fix.go:54] fixHost starting: 
	I0229 18:12:48.209020   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:12:48.209051   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:12:48.222574   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0229 18:12:48.222975   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:12:48.223436   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:12:48.223453   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:12:48.223746   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:12:48.223912   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:12:48.224060   30631 main.go:141] libmachine: (multinode-051105) Calling .GetState
	I0229 18:12:48.225378   30631 fix.go:102] recreateIfNeeded on multinode-051105: state=Running err=<nil>
	W0229 18:12:48.225413   30631 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:12:48.227146   30631 out.go:177] * Updating the running kvm2 "multinode-051105" VM ...
	I0229 18:12:48.228383   30631 machine.go:88] provisioning docker machine ...
	I0229 18:12:48.228402   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:12:48.228588   30631 main.go:141] libmachine: (multinode-051105) Calling .GetMachineName
	I0229 18:12:48.228760   30631 buildroot.go:166] provisioning hostname "multinode-051105"
	I0229 18:12:48.228776   30631 main.go:141] libmachine: (multinode-051105) Calling .GetMachineName
	I0229 18:12:48.228936   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:12:48.231176   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:12:48.231553   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:06:35 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:12:48.231579   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:12:48.231698   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:12:48.231831   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:12:48.231951   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:12:48.232081   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:12:48.232207   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:12:48.232369   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0229 18:12:48.232381   30631 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051105 && echo "multinode-051105" | sudo tee /etc/hostname
	I0229 18:13:06.611275   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:12.691436   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:15.763333   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:21.843309   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:24.915232   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:30.999288   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:34.067333   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:40.147327   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:43.219227   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:49.299268   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:52.371408   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:13:58.451315   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:01.523323   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:07.603315   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:10.675325   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:16.755249   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:19.827223   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:25.907270   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:28.979319   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:35.059272   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:38.131334   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:44.211290   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:47.283342   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:53.363350   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:14:56.435294   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:02.515273   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:05.587358   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:11.667314   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:14.739261   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:20.819278   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:23.891273   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:29.971242   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:33.043281   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:39.123273   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:42.195246   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:48.275280   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:51.347341   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:15:57.427312   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:00.499276   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:06.579347   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:09.651238   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:15.731276   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:18.803258   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:24.883358   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:27.955236   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:34.035275   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:37.107293   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:43.187335   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:46.259316   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:52.339305   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:16:55.411305   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:01.491298   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:04.563327   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:10.643310   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:13.715284   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:19.795283   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:22.867268   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:28.947278   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:32.019279   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:38.099298   30631 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.200:22: connect: no route to host
	I0229 18:17:41.101595   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:17:41.101632   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:17:41.103580   30631 machine.go:91] provisioned docker machine in 4m52.875180696s
	I0229 18:17:41.103615   30631 fix.go:56] fixHost completed within 4m52.894875868s
	I0229 18:17:41.103621   30631 start.go:83] releasing machines lock for "multinode-051105", held for 4m52.894897649s
	W0229 18:17:41.103633   30631 start.go:694] error starting host: provision: host is not running
	W0229 18:17:41.103719   30631 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 18:17:41.103729   30631 start.go:709] Will try again in 5 seconds ...
	I0229 18:17:46.106641   30631 start.go:365] acquiring machines lock for multinode-051105: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:17:46.106738   30631 start.go:369] acquired machines lock for "multinode-051105" in 60.088µs
	I0229 18:17:46.106758   30631 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:17:46.106765   30631 fix.go:54] fixHost starting: 
	I0229 18:17:46.107064   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:17:46.107092   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:17:46.121610   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0229 18:17:46.122009   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:17:46.122468   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:17:46.122494   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:17:46.122818   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:17:46.123012   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:17:46.123161   30631 main.go:141] libmachine: (multinode-051105) Calling .GetState
	I0229 18:17:46.124759   30631 fix.go:102] recreateIfNeeded on multinode-051105: state=Stopped err=<nil>
	I0229 18:17:46.124777   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	W0229 18:17:46.124942   30631 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:17:46.128323   30631 out.go:177] * Restarting existing kvm2 VM for "multinode-051105" ...
	I0229 18:17:46.129738   30631 main.go:141] libmachine: (multinode-051105) Calling .Start
	I0229 18:17:46.129917   30631 main.go:141] libmachine: (multinode-051105) Ensuring networks are active...
	I0229 18:17:46.130672   30631 main.go:141] libmachine: (multinode-051105) Ensuring network default is active
	I0229 18:17:46.131113   30631 main.go:141] libmachine: (multinode-051105) Ensuring network mk-multinode-051105 is active
	I0229 18:17:46.131556   30631 main.go:141] libmachine: (multinode-051105) Getting domain xml...
	I0229 18:17:46.132244   30631 main.go:141] libmachine: (multinode-051105) Creating domain...
	I0229 18:17:47.316133   30631 main.go:141] libmachine: (multinode-051105) Waiting to get IP...
	I0229 18:17:47.316836   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:47.317274   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:47.317345   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:47.317256   31852 retry.go:31] will retry after 261.336551ms: waiting for machine to come up
	I0229 18:17:47.579884   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:47.580282   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:47.580308   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:47.580248   31852 retry.go:31] will retry after 387.761152ms: waiting for machine to come up
	I0229 18:17:47.969820   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:47.970176   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:47.970197   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:47.970139   31852 retry.go:31] will retry after 340.457736ms: waiting for machine to come up
	I0229 18:17:48.312851   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:48.313317   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:48.313345   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:48.313258   31852 retry.go:31] will retry after 541.71181ms: waiting for machine to come up
	I0229 18:17:48.856914   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:48.857348   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:48.857380   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:48.857297   31852 retry.go:31] will retry after 578.387276ms: waiting for machine to come up
	I0229 18:17:49.436778   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:49.437163   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:49.437186   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:49.437119   31852 retry.go:31] will retry after 740.963516ms: waiting for machine to come up
	I0229 18:17:50.179960   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:50.180337   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:50.180369   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:50.180294   31852 retry.go:31] will retry after 1.002641544s: waiting for machine to come up
	I0229 18:17:51.184787   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:51.185341   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:51.185416   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:51.185325   31852 retry.go:31] will retry after 1.314430565s: waiting for machine to come up
	I0229 18:17:52.501111   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:52.501582   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:52.501606   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:52.501551   31852 retry.go:31] will retry after 1.277202652s: waiting for machine to come up
	I0229 18:17:53.780844   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:53.781318   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:53.781347   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:53.781257   31852 retry.go:31] will retry after 1.680186849s: waiting for machine to come up
	I0229 18:17:55.462600   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:55.463120   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:55.463142   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:55.463076   31852 retry.go:31] will retry after 2.41454071s: waiting for machine to come up
	I0229 18:17:57.880012   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:17:57.880412   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:17:57.880435   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:17:57.880373   31852 retry.go:31] will retry after 2.569930334s: waiting for machine to come up
	I0229 18:18:00.452954   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:00.453357   30631 main.go:141] libmachine: (multinode-051105) DBG | unable to find current IP address of domain multinode-051105 in network mk-multinode-051105
	I0229 18:18:00.453398   30631 main.go:141] libmachine: (multinode-051105) DBG | I0229 18:18:00.453330   31852 retry.go:31] will retry after 2.886438472s: waiting for machine to come up
	I0229 18:18:03.342509   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.342995   30631 main.go:141] libmachine: (multinode-051105) Found IP for machine: 192.168.39.200
	I0229 18:18:03.343012   30631 main.go:141] libmachine: (multinode-051105) Reserving static IP address...
	I0229 18:18:03.343045   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has current primary IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.343491   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "multinode-051105", mac: "52:54:00:58:1f:e6", ip: "192.168.39.200"} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.343523   30631 main.go:141] libmachine: (multinode-051105) DBG | skip adding static IP to network mk-multinode-051105 - found existing host DHCP lease matching {name: "multinode-051105", mac: "52:54:00:58:1f:e6", ip: "192.168.39.200"}
	I0229 18:18:03.343538   30631 main.go:141] libmachine: (multinode-051105) Reserved static IP address: 192.168.39.200
	I0229 18:18:03.343553   30631 main.go:141] libmachine: (multinode-051105) Waiting for SSH to be available...
	I0229 18:18:03.343570   30631 main.go:141] libmachine: (multinode-051105) DBG | Getting to WaitForSSH function...
	I0229 18:18:03.345603   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.345918   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.345943   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.346098   30631 main.go:141] libmachine: (multinode-051105) DBG | Using SSH client type: external
	I0229 18:18:03.346136   30631 main.go:141] libmachine: (multinode-051105) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa (-rw-------)
	I0229 18:18:03.346182   30631 main.go:141] libmachine: (multinode-051105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:18:03.346207   30631 main.go:141] libmachine: (multinode-051105) DBG | About to run SSH command:
	I0229 18:18:03.346217   30631 main.go:141] libmachine: (multinode-051105) DBG | exit 0
	I0229 18:18:03.471175   30631 main.go:141] libmachine: (multinode-051105) DBG | SSH cmd err, output: <nil>: 
	I0229 18:18:03.471566   30631 main.go:141] libmachine: (multinode-051105) Calling .GetConfigRaw
	I0229 18:18:03.472232   30631 main.go:141] libmachine: (multinode-051105) Calling .GetIP
	I0229 18:18:03.474665   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.475132   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.475153   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.475429   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:18:03.475641   30631 machine.go:88] provisioning docker machine ...
	I0229 18:18:03.475659   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:03.475852   30631 main.go:141] libmachine: (multinode-051105) Calling .GetMachineName
	I0229 18:18:03.476014   30631 buildroot.go:166] provisioning hostname "multinode-051105"
	I0229 18:18:03.476034   30631 main.go:141] libmachine: (multinode-051105) Calling .GetMachineName
	I0229 18:18:03.476170   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:03.478330   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.478661   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.478693   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.478825   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:03.479018   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:03.479167   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:03.479289   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:03.479437   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:03.479617   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0229 18:18:03.479633   30631 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051105 && echo "multinode-051105" | sudo tee /etc/hostname
	I0229 18:18:03.605233   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051105
	
	I0229 18:18:03.605256   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:03.607981   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.608340   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.608368   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.608522   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:03.608719   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:03.608889   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:03.609024   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:03.609166   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:03.609358   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0229 18:18:03.609391   30631 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-051105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-051105/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-051105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:18:03.725523   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:18:03.725547   30631 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:18:03.725562   30631 buildroot.go:174] setting up certificates
	I0229 18:18:03.725571   30631 provision.go:83] configureAuth start
	I0229 18:18:03.725578   30631 main.go:141] libmachine: (multinode-051105) Calling .GetMachineName
	I0229 18:18:03.725822   30631 main.go:141] libmachine: (multinode-051105) Calling .GetIP
	I0229 18:18:03.728290   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.728591   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.728619   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.728750   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:03.730836   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.731224   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.731244   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.731386   30631 provision.go:138] copyHostCerts
	I0229 18:18:03.731416   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:18:03.731451   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:18:03.731474   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:18:03.731555   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:18:03.731648   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:18:03.731678   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:18:03.731688   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:18:03.731726   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:18:03.731785   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:18:03.731809   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:18:03.731818   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:18:03.731849   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:18:03.731908   30631 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.multinode-051105 san=[192.168.39.200 192.168.39.200 localhost 127.0.0.1 minikube multinode-051105]
	I0229 18:18:03.826462   30631 provision.go:172] copyRemoteCerts
	I0229 18:18:03.826525   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:18:03.826552   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:03.829406   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.829742   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:03.829770   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:03.829968   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:03.830132   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:03.830263   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:03.830372   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:18:03.917707   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 18:18:03.917779   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 18:18:03.947711   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 18:18:03.947760   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:18:03.977229   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 18:18:03.977289   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:18:04.006618   30631 provision.go:86] duration metric: configureAuth took 281.037099ms
	I0229 18:18:04.006655   30631 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:18:04.006837   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:18:04.006906   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:04.009412   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.009772   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.009802   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.009989   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:04.010175   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.010291   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.010409   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:04.010564   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:04.010761   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0229 18:18:04.010777   30631 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:18:04.284141   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:18:04.284160   30631 machine.go:91] provisioned docker machine in 808.506557ms
	I0229 18:18:04.284171   30631 start.go:300] post-start starting for "multinode-051105" (driver="kvm2")
	I0229 18:18:04.284182   30631 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:18:04.284201   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:04.284529   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:18:04.284559   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:04.287041   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.287393   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.287430   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.287577   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:04.287745   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.287909   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:04.288067   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:18:04.378837   30631 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:18:04.383528   30631 command_runner.go:130] > NAME=Buildroot
	I0229 18:18:04.383550   30631 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:18:04.383557   30631 command_runner.go:130] > ID=buildroot
	I0229 18:18:04.383565   30631 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:18:04.383573   30631 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:18:04.383619   30631 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:18:04.383638   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:18:04.383702   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:18:04.383803   30631 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:18:04.383815   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /etc/ssl/certs/136512.pem
	I0229 18:18:04.383918   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:18:04.394901   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:18:04.420637   30631 start.go:303] post-start completed in 136.452878ms
	I0229 18:18:04.420661   30631 fix.go:56] fixHost completed within 18.31389501s
	I0229 18:18:04.420686   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:04.423299   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.423622   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.423649   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.423815   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:04.424004   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.424132   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.424260   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:04.424412   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:04.424570   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0229 18:18:04.424580   30631 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:18:04.532101   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709230684.486968119
	
	I0229 18:18:04.532123   30631 fix.go:206] guest clock: 1709230684.486968119
	I0229 18:18:04.532133   30631 fix.go:219] Guest: 2024-02-29 18:18:04.486968119 +0000 UTC Remote: 2024-02-29 18:18:04.420666068 +0000 UTC m=+316.341919449 (delta=66.302051ms)
	I0229 18:18:04.532173   30631 fix.go:190] guest clock delta is within tolerance: 66.302051ms
	I0229 18:18:04.532180   30631 start.go:83] releasing machines lock for "multinode-051105", held for 18.425433912s
	I0229 18:18:04.532217   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:04.532462   30631 main.go:141] libmachine: (multinode-051105) Calling .GetIP
	I0229 18:18:04.535149   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.535550   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.535573   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.535771   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:04.536244   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:04.536427   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:18:04.536509   30631 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:18:04.536561   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:04.536636   30631 ssh_runner.go:195] Run: cat /version.json
	I0229 18:18:04.536661   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:18:04.539305   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.539392   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.539652   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.539677   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.539702   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:04.539719   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:04.539797   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:04.539969   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:18:04.539987   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.540144   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:04.540149   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:18:04.540296   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:18:04.540293   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:18:04.540437   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:18:04.642206   30631 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 18:18:04.642287   30631 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:18:04.642368   30631 ssh_runner.go:195] Run: systemctl --version
	I0229 18:18:04.648482   30631 command_runner.go:130] > systemd 252 (252)
	I0229 18:18:04.648513   30631 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 18:18:04.648757   30631 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:18:04.801514   30631 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:18:04.809408   30631 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 18:18:04.809624   30631 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:18:04.809682   30631 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:18:04.828078   30631 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 18:18:04.828129   30631 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:18:04.828141   30631 start.go:475] detecting cgroup driver to use...
	I0229 18:18:04.828190   30631 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:18:04.847404   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:18:04.862017   30631 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:18:04.862080   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:18:04.876887   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:18:04.891202   30631 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:18:05.007215   30631 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0229 18:18:05.007307   30631 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:18:05.023819   30631 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0229 18:18:05.183518   30631 docker.go:233] disabling docker service ...
	I0229 18:18:05.183575   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:18:05.199274   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:18:05.213145   30631 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0229 18:18:05.213227   30631 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:18:05.228591   30631 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0229 18:18:05.348037   30631 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:18:05.470120   30631 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0229 18:18:05.470146   30631 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0229 18:18:05.470202   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:18:05.486233   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:18:05.506719   30631 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0229 18:18:05.506755   30631 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:18:05.506794   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:18:05.519376   30631 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:18:05.519448   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:18:05.531548   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:18:05.543470   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:18:05.555524   30631 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:18:05.567510   30631 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:18:05.578149   30631 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:18:05.578182   30631 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:18:05.578223   30631 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:18:05.593180   30631 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:18:05.604055   30631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:18:05.718986   30631 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:18:05.856283   30631 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:18:05.856356   30631 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:18:05.861715   30631 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 18:18:05.861732   30631 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:18:05.861739   30631 command_runner.go:130] > Device: 0,22	Inode: 810         Links: 1
	I0229 18:18:05.861747   30631 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:18:05.861753   30631 command_runner.go:130] > Access: 2024-02-29 18:18:05.803300258 +0000
	I0229 18:18:05.861758   30631 command_runner.go:130] > Modify: 2024-02-29 18:18:05.803300258 +0000
	I0229 18:18:05.861776   30631 command_runner.go:130] > Change: 2024-02-29 18:18:05.803300258 +0000
	I0229 18:18:05.861782   30631 command_runner.go:130] >  Birth: -
	I0229 18:18:05.861925   30631 start.go:543] Will wait 60s for crictl version
	I0229 18:18:05.861993   30631 ssh_runner.go:195] Run: which crictl
	I0229 18:18:05.866013   30631 command_runner.go:130] > /usr/bin/crictl
	I0229 18:18:05.866154   30631 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:18:05.909882   30631 command_runner.go:130] > Version:  0.1.0
	I0229 18:18:05.909901   30631 command_runner.go:130] > RuntimeName:  cri-o
	I0229 18:18:05.909908   30631 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 18:18:05.909915   30631 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:18:05.910013   30631 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:18:05.910090   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:18:05.940640   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:18:05.940660   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:18:05.940667   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:18:05.940673   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:18:05.940680   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:18:05.940693   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:18:05.940700   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:18:05.940709   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:18:05.940722   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:18:05.940739   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:18:05.940749   30631 command_runner.go:130] > BuildTags:      
	I0229 18:18:05.940775   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:18:05.940781   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:18:05.940787   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:18:05.940791   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:18:05.940795   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:18:05.940799   30631 command_runner.go:130] >   seccomp
	I0229 18:18:05.940802   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:18:05.940806   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:18:05.940810   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:18:05.940876   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:18:05.971821   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:18:05.971841   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:18:05.971849   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:18:05.971855   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:18:05.971861   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:18:05.971868   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:18:05.971874   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:18:05.971880   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:18:05.971886   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:18:05.971894   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:18:05.971910   30631 command_runner.go:130] > BuildTags:      
	I0229 18:18:05.971922   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:18:05.971933   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:18:05.971942   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:18:05.971950   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:18:05.971961   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:18:05.971970   30631 command_runner.go:130] >   seccomp
	I0229 18:18:05.971978   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:18:05.971988   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:18:05.971997   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:18:05.974168   30631 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:18:05.975575   30631 main.go:141] libmachine: (multinode-051105) Calling .GetIP
	I0229 18:18:05.978402   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:05.978727   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:18:05.978746   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:18:05.978941   30631 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:18:05.983311   30631 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:18:05.996640   30631 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:18:05.996697   30631 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:18:06.031413   30631 command_runner.go:130] > {
	I0229 18:18:06.031433   30631 command_runner.go:130] >   "images": [
	I0229 18:18:06.031438   30631 command_runner.go:130] >     {
	I0229 18:18:06.031450   30631 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0229 18:18:06.031455   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:06.031482   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0229 18:18:06.031491   30631 command_runner.go:130] >       ],
	I0229 18:18:06.031498   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:06.031513   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0229 18:18:06.031529   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0229 18:18:06.031536   30631 command_runner.go:130] >       ],
	I0229 18:18:06.031544   30631 command_runner.go:130] >       "size": "65258016",
	I0229 18:18:06.031551   30631 command_runner.go:130] >       "uid": null,
	I0229 18:18:06.031557   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:06.031566   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:06.031573   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:06.031580   30631 command_runner.go:130] >     },
	I0229 18:18:06.031586   30631 command_runner.go:130] >     {
	I0229 18:18:06.031597   30631 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0229 18:18:06.031604   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:06.031613   30631 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0229 18:18:06.031622   30631 command_runner.go:130] >       ],
	I0229 18:18:06.031629   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:06.031641   30631 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0229 18:18:06.031656   30631 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0229 18:18:06.031662   30631 command_runner.go:130] >       ],
	I0229 18:18:06.031670   30631 command_runner.go:130] >       "size": "750414",
	I0229 18:18:06.031677   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:06.031687   30631 command_runner.go:130] >         "value": "65535"
	I0229 18:18:06.031702   30631 command_runner.go:130] >       },
	I0229 18:18:06.031713   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:06.031721   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:06.031730   30631 command_runner.go:130] >       "pinned": true
	I0229 18:18:06.031736   30631 command_runner.go:130] >     }
	I0229 18:18:06.031744   30631 command_runner.go:130] >   ]
	I0229 18:18:06.031750   30631 command_runner.go:130] > }
	I0229 18:18:06.032871   30631 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:18:06.032947   30631 ssh_runner.go:195] Run: which lz4
	I0229 18:18:06.037206   30631 command_runner.go:130] > /usr/bin/lz4
	I0229 18:18:06.037332   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 18:18:06.037420   30631 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:18:06.041733   30631 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:18:06.041876   30631 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:18:06.041903   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:18:07.825029   30631 crio.go:444] Took 1.787638 seconds to copy over tarball
	I0229 18:18:07.825099   30631 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:18:10.653256   30631 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828134473s)
	I0229 18:18:10.653281   30631 crio.go:451] Took 2.828228 seconds to extract the tarball
	I0229 18:18:10.653292   30631 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:18:10.695947   30631 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:18:10.739311   30631 command_runner.go:130] > {
	I0229 18:18:10.739332   30631 command_runner.go:130] >   "images": [
	I0229 18:18:10.739338   30631 command_runner.go:130] >     {
	I0229 18:18:10.739349   30631 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0229 18:18:10.739355   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.739362   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0229 18:18:10.739366   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739371   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.739385   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0229 18:18:10.739397   30631 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0229 18:18:10.739403   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739412   30631 command_runner.go:130] >       "size": "65258016",
	I0229 18:18:10.739429   30631 command_runner.go:130] >       "uid": null,
	I0229 18:18:10.739435   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.739443   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.739450   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.739457   30631 command_runner.go:130] >     },
	I0229 18:18:10.739462   30631 command_runner.go:130] >     {
	I0229 18:18:10.739493   30631 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0229 18:18:10.739503   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.739512   30631 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0229 18:18:10.739517   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739524   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.739540   30631 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0229 18:18:10.739555   30631 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0229 18:18:10.739565   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739579   30631 command_runner.go:130] >       "size": "31470524",
	I0229 18:18:10.739589   30631 command_runner.go:130] >       "uid": null,
	I0229 18:18:10.739597   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.739606   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.739613   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.739621   30631 command_runner.go:130] >     },
	I0229 18:18:10.739628   30631 command_runner.go:130] >     {
	I0229 18:18:10.739641   30631 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0229 18:18:10.739648   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.739660   30631 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0229 18:18:10.739677   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739687   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.739701   30631 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0229 18:18:10.739717   30631 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0229 18:18:10.739726   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739734   30631 command_runner.go:130] >       "size": "53621675",
	I0229 18:18:10.739747   30631 command_runner.go:130] >       "uid": null,
	I0229 18:18:10.739757   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.739766   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.739774   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.739783   30631 command_runner.go:130] >     },
	I0229 18:18:10.739790   30631 command_runner.go:130] >     {
	I0229 18:18:10.739803   30631 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0229 18:18:10.739812   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.739821   30631 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0229 18:18:10.739829   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739837   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.739851   30631 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0229 18:18:10.739872   30631 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0229 18:18:10.739889   30631 command_runner.go:130] >       ],
	I0229 18:18:10.739899   30631 command_runner.go:130] >       "size": "295456551",
	I0229 18:18:10.739906   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:10.739913   30631 command_runner.go:130] >         "value": "0"
	I0229 18:18:10.739923   30631 command_runner.go:130] >       },
	I0229 18:18:10.739940   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.739949   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.739957   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.739965   30631 command_runner.go:130] >     },
	I0229 18:18:10.739971   30631 command_runner.go:130] >     {
	I0229 18:18:10.739983   30631 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0229 18:18:10.739992   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.740003   30631 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0229 18:18:10.740012   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740019   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.740034   30631 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0229 18:18:10.740050   30631 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0229 18:18:10.740058   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740067   30631 command_runner.go:130] >       "size": "127226832",
	I0229 18:18:10.740075   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:10.740083   30631 command_runner.go:130] >         "value": "0"
	I0229 18:18:10.740089   30631 command_runner.go:130] >       },
	I0229 18:18:10.740097   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.740108   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.740116   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.740123   30631 command_runner.go:130] >     },
	I0229 18:18:10.740132   30631 command_runner.go:130] >     {
	I0229 18:18:10.740143   30631 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0229 18:18:10.740152   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.740161   30631 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0229 18:18:10.740170   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740178   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.740194   30631 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0229 18:18:10.740211   30631 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0229 18:18:10.740220   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740234   30631 command_runner.go:130] >       "size": "123261750",
	I0229 18:18:10.740243   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:10.740249   30631 command_runner.go:130] >         "value": "0"
	I0229 18:18:10.740257   30631 command_runner.go:130] >       },
	I0229 18:18:10.740267   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.740275   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.740285   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.740291   30631 command_runner.go:130] >     },
	I0229 18:18:10.740297   30631 command_runner.go:130] >     {
	I0229 18:18:10.740308   30631 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0229 18:18:10.740317   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.740326   30631 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0229 18:18:10.740334   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740341   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.740357   30631 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0229 18:18:10.740373   30631 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0229 18:18:10.740381   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740389   30631 command_runner.go:130] >       "size": "74749335",
	I0229 18:18:10.740398   30631 command_runner.go:130] >       "uid": null,
	I0229 18:18:10.740407   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.740416   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.740424   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.740432   30631 command_runner.go:130] >     },
	I0229 18:18:10.740438   30631 command_runner.go:130] >     {
	I0229 18:18:10.740449   30631 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0229 18:18:10.740459   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.740469   30631 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0229 18:18:10.740477   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740485   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.740517   30631 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0229 18:18:10.740540   30631 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0229 18:18:10.740545   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740552   30631 command_runner.go:130] >       "size": "61551410",
	I0229 18:18:10.740559   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:10.740566   30631 command_runner.go:130] >         "value": "0"
	I0229 18:18:10.740578   30631 command_runner.go:130] >       },
	I0229 18:18:10.740592   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.740603   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.740612   30631 command_runner.go:130] >       "pinned": false
	I0229 18:18:10.740619   30631 command_runner.go:130] >     },
	I0229 18:18:10.740625   30631 command_runner.go:130] >     {
	I0229 18:18:10.740636   30631 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0229 18:18:10.740646   30631 command_runner.go:130] >       "repoTags": [
	I0229 18:18:10.740654   30631 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0229 18:18:10.740663   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740671   30631 command_runner.go:130] >       "repoDigests": [
	I0229 18:18:10.740686   30631 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0229 18:18:10.740701   30631 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0229 18:18:10.740709   30631 command_runner.go:130] >       ],
	I0229 18:18:10.740717   30631 command_runner.go:130] >       "size": "750414",
	I0229 18:18:10.740725   30631 command_runner.go:130] >       "uid": {
	I0229 18:18:10.740732   30631 command_runner.go:130] >         "value": "65535"
	I0229 18:18:10.740740   30631 command_runner.go:130] >       },
	I0229 18:18:10.740748   30631 command_runner.go:130] >       "username": "",
	I0229 18:18:10.740758   30631 command_runner.go:130] >       "spec": null,
	I0229 18:18:10.740768   30631 command_runner.go:130] >       "pinned": true
	I0229 18:18:10.740774   30631 command_runner.go:130] >     }
	I0229 18:18:10.740780   30631 command_runner.go:130] >   ]
	I0229 18:18:10.740787   30631 command_runner.go:130] > }
	I0229 18:18:10.741135   30631 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:18:10.741156   30631 cache_images.go:84] Images are preloaded, skipping loading
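	The JSON dump above is the CRI image inventory that this preload check parses before deciding to skip image loading. A minimal way to reproduce the same listing by hand on the node, assuming crictl is available inside the minikube VM and using a placeholder profile name in place of the one created by this test:

	    $ minikube -p <profile> ssh -- sudo crictl images -o json

	The "pinned": true entry on the pause image corresponds to the pinned_images handling described later in the crio config output.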
	I0229 18:18:10.741226   30631 ssh_runner.go:195] Run: crio config
	I0229 18:18:10.781335   30631 command_runner.go:130] ! time="2024-02-29 18:18:10.740042442Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 18:18:10.786556   30631 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0229 18:18:10.794347   30631 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 18:18:10.794369   30631 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 18:18:10.794379   30631 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 18:18:10.794384   30631 command_runner.go:130] > #
	I0229 18:18:10.794400   30631 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 18:18:10.794411   30631 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 18:18:10.794430   30631 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 18:18:10.794444   30631 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 18:18:10.794465   30631 command_runner.go:130] > # reload'.
	I0229 18:18:10.794476   30631 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 18:18:10.794494   30631 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 18:18:10.794503   30631 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 18:18:10.794512   30631 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 18:18:10.794521   30631 command_runner.go:130] > [crio]
	I0229 18:18:10.794531   30631 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 18:18:10.794542   30631 command_runner.go:130] > # containers images, in this directory.
	I0229 18:18:10.794553   30631 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 18:18:10.794569   30631 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 18:18:10.794577   30631 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 18:18:10.794585   30631 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 18:18:10.794592   30631 command_runner.go:130] > # imagestore = ""
	I0229 18:18:10.794598   30631 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 18:18:10.794608   30631 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 18:18:10.794619   30631 command_runner.go:130] > storage_driver = "overlay"
	I0229 18:18:10.794629   30631 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 18:18:10.794642   30631 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 18:18:10.794651   30631 command_runner.go:130] > storage_option = [
	I0229 18:18:10.794661   30631 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 18:18:10.794670   30631 command_runner.go:130] > ]
	I0229 18:18:10.794681   30631 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 18:18:10.794690   30631 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 18:18:10.794697   30631 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 18:18:10.794708   30631 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 18:18:10.794721   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 18:18:10.794731   30631 command_runner.go:130] > # always happen on a node reboot
	I0229 18:18:10.794742   30631 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 18:18:10.794765   30631 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 18:18:10.794779   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 18:18:10.794785   30631 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 18:18:10.794793   30631 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 18:18:10.794808   30631 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 18:18:10.794824   30631 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 18:18:10.794834   30631 command_runner.go:130] > # internal_wipe = true
	I0229 18:18:10.794850   30631 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 18:18:10.794871   30631 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 18:18:10.794880   30631 command_runner.go:130] > # internal_repair = false
	I0229 18:18:10.794892   30631 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 18:18:10.794905   30631 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 18:18:10.794917   30631 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 18:18:10.794928   30631 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 18:18:10.794941   30631 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 18:18:10.794950   30631 command_runner.go:130] > [crio.api]
	I0229 18:18:10.794960   30631 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 18:18:10.794969   30631 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 18:18:10.794978   30631 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 18:18:10.794990   30631 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 18:18:10.795004   30631 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 18:18:10.795016   30631 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 18:18:10.795037   30631 command_runner.go:130] > # stream_port = "0"
	I0229 18:18:10.795049   30631 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 18:18:10.795059   30631 command_runner.go:130] > # stream_enable_tls = false
	I0229 18:18:10.795071   30631 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 18:18:10.795081   30631 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 18:18:10.795093   30631 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 18:18:10.795105   30631 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 18:18:10.795115   30631 command_runner.go:130] > # minutes.
	I0229 18:18:10.795122   30631 command_runner.go:130] > # stream_tls_cert = ""
	I0229 18:18:10.795135   30631 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 18:18:10.795148   30631 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 18:18:10.795157   30631 command_runner.go:130] > # stream_tls_key = ""
	I0229 18:18:10.795169   30631 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 18:18:10.795181   30631 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 18:18:10.795208   30631 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 18:18:10.795219   30631 command_runner.go:130] > # stream_tls_ca = ""
	I0229 18:18:10.795231   30631 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:18:10.795241   30631 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 18:18:10.795255   30631 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:18:10.795265   30631 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0229 18:18:10.795276   30631 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 18:18:10.795287   30631 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 18:18:10.795307   30631 command_runner.go:130] > [crio.runtime]
	I0229 18:18:10.795320   30631 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 18:18:10.795331   30631 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 18:18:10.795344   30631 command_runner.go:130] > # "nofile=1024:2048"
	I0229 18:18:10.795356   30631 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 18:18:10.795363   30631 command_runner.go:130] > # default_ulimits = [
	I0229 18:18:10.795367   30631 command_runner.go:130] > # ]
	I0229 18:18:10.795375   30631 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 18:18:10.795385   30631 command_runner.go:130] > # no_pivot = false
	I0229 18:18:10.795397   30631 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 18:18:10.795411   30631 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 18:18:10.795421   30631 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 18:18:10.795435   30631 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 18:18:10.795445   30631 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 18:18:10.795456   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:18:10.795465   30631 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 18:18:10.795476   30631 command_runner.go:130] > # Cgroup setting for conmon
	I0229 18:18:10.795494   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 18:18:10.795504   30631 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 18:18:10.795517   30631 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 18:18:10.795532   30631 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 18:18:10.795544   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:18:10.795551   30631 command_runner.go:130] > conmon_env = [
	I0229 18:18:10.795564   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:18:10.795572   30631 command_runner.go:130] > ]
	I0229 18:18:10.795584   30631 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 18:18:10.795594   30631 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 18:18:10.795607   30631 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 18:18:10.795613   30631 command_runner.go:130] > # default_env = [
	I0229 18:18:10.795620   30631 command_runner.go:130] > # ]
	I0229 18:18:10.795629   30631 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 18:18:10.795640   30631 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0229 18:18:10.795649   30631 command_runner.go:130] > # selinux = false
	I0229 18:18:10.795663   30631 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 18:18:10.795676   30631 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 18:18:10.795688   30631 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 18:18:10.795703   30631 command_runner.go:130] > # seccomp_profile = ""
	I0229 18:18:10.795715   30631 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 18:18:10.795725   30631 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 18:18:10.795748   30631 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 18:18:10.795760   30631 command_runner.go:130] > # which might increase security.
	I0229 18:18:10.795767   30631 command_runner.go:130] > # This option is currently deprecated,
	I0229 18:18:10.795779   30631 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 18:18:10.795789   30631 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 18:18:10.795802   30631 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 18:18:10.795813   30631 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 18:18:10.795826   30631 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 18:18:10.795839   30631 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 18:18:10.795850   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:18:10.795860   30631 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 18:18:10.795872   30631 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 18:18:10.795882   30631 command_runner.go:130] > # the cgroup blockio controller.
	I0229 18:18:10.795891   30631 command_runner.go:130] > # blockio_config_file = ""
	I0229 18:18:10.795903   30631 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 18:18:10.795911   30631 command_runner.go:130] > # blockio parameters.
	I0229 18:18:10.795920   30631 command_runner.go:130] > # blockio_reload = false
	I0229 18:18:10.795935   30631 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 18:18:10.795945   30631 command_runner.go:130] > # irqbalance daemon.
	I0229 18:18:10.795957   30631 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 18:18:10.795969   30631 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0229 18:18:10.795982   30631 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 18:18:10.795994   30631 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 18:18:10.796006   30631 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 18:18:10.796019   30631 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 18:18:10.796031   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:18:10.796040   30631 command_runner.go:130] > # rdt_config_file = ""
	I0229 18:18:10.796049   30631 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 18:18:10.796059   30631 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 18:18:10.796096   30631 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 18:18:10.796108   30631 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 18:18:10.796118   30631 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 18:18:10.796132   30631 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 18:18:10.796146   30631 command_runner.go:130] > # will be added.
	I0229 18:18:10.796156   30631 command_runner.go:130] > # default_capabilities = [
	I0229 18:18:10.796165   30631 command_runner.go:130] > # 	"CHOWN",
	I0229 18:18:10.796173   30631 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 18:18:10.796178   30631 command_runner.go:130] > # 	"FSETID",
	I0229 18:18:10.796184   30631 command_runner.go:130] > # 	"FOWNER",
	I0229 18:18:10.796193   30631 command_runner.go:130] > # 	"SETGID",
	I0229 18:18:10.796203   30631 command_runner.go:130] > # 	"SETUID",
	I0229 18:18:10.796213   30631 command_runner.go:130] > # 	"SETPCAP",
	I0229 18:18:10.796220   30631 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 18:18:10.796228   30631 command_runner.go:130] > # 	"KILL",
	I0229 18:18:10.796234   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796248   30631 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 18:18:10.796261   30631 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 18:18:10.796268   30631 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 18:18:10.796281   30631 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 18:18:10.796294   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:18:10.796303   30631 command_runner.go:130] > # default_sysctls = [
	I0229 18:18:10.796309   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796319   30631 command_runner.go:130] > # List of devices on the host that a
	I0229 18:18:10.796332   30631 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 18:18:10.796341   30631 command_runner.go:130] > # allowed_devices = [
	I0229 18:18:10.796349   30631 command_runner.go:130] > # 	"/dev/fuse",
	I0229 18:18:10.796356   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796363   30631 command_runner.go:130] > # List of additional devices. specified as
	I0229 18:18:10.796379   30631 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 18:18:10.796391   30631 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 18:18:10.796403   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:18:10.796412   30631 command_runner.go:130] > # additional_devices = [
	I0229 18:18:10.796420   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796431   30631 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 18:18:10.796439   30631 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 18:18:10.796446   30631 command_runner.go:130] > # 	"/etc/cdi",
	I0229 18:18:10.796452   30631 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 18:18:10.796460   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796474   30631 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 18:18:10.796497   30631 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 18:18:10.796506   30631 command_runner.go:130] > # Defaults to false.
	I0229 18:18:10.796517   30631 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 18:18:10.796529   30631 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 18:18:10.796539   30631 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 18:18:10.796547   30631 command_runner.go:130] > # hooks_dir = [
	I0229 18:18:10.796556   30631 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 18:18:10.796564   30631 command_runner.go:130] > # ]
	I0229 18:18:10.796573   30631 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 18:18:10.796586   30631 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 18:18:10.796597   30631 command_runner.go:130] > # its default mounts from the following two files:
	I0229 18:18:10.796605   30631 command_runner.go:130] > #
	I0229 18:18:10.796618   30631 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 18:18:10.796629   30631 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 18:18:10.796641   30631 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 18:18:10.796649   30631 command_runner.go:130] > #
	I0229 18:18:10.796662   30631 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 18:18:10.796675   30631 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 18:18:10.796689   30631 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 18:18:10.796700   30631 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 18:18:10.796708   30631 command_runner.go:130] > #
	I0229 18:18:10.796713   30631 command_runner.go:130] > # default_mounts_file = ""
	I0229 18:18:10.796724   30631 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 18:18:10.796736   30631 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 18:18:10.796746   30631 command_runner.go:130] > pids_limit = 1024
	I0229 18:18:10.796758   30631 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0229 18:18:10.796771   30631 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 18:18:10.796784   30631 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 18:18:10.796799   30631 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 18:18:10.796806   30631 command_runner.go:130] > # log_size_max = -1
	I0229 18:18:10.796817   30631 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 18:18:10.796828   30631 command_runner.go:130] > # log_to_journald = false
	I0229 18:18:10.796840   30631 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 18:18:10.796852   30631 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 18:18:10.796863   30631 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 18:18:10.796874   30631 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 18:18:10.796890   30631 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 18:18:10.796899   30631 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 18:18:10.796908   30631 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 18:18:10.796918   30631 command_runner.go:130] > # read_only = false
	I0229 18:18:10.796931   30631 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 18:18:10.796944   30631 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 18:18:10.796954   30631 command_runner.go:130] > # live configuration reload.
	I0229 18:18:10.796963   30631 command_runner.go:130] > # log_level = "info"
	I0229 18:18:10.796975   30631 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 18:18:10.796983   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:18:10.796989   30631 command_runner.go:130] > # log_filter = ""
	I0229 18:18:10.797002   30631 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 18:18:10.797018   30631 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 18:18:10.797028   30631 command_runner.go:130] > # separated by comma.
	I0229 18:18:10.797043   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:18:10.797052   30631 command_runner.go:130] > # uid_mappings = ""
	I0229 18:18:10.797064   30631 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 18:18:10.797073   30631 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 18:18:10.797079   30631 command_runner.go:130] > # separated by comma.
	I0229 18:18:10.797100   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:18:10.797110   30631 command_runner.go:130] > # gid_mappings = ""
	I0229 18:18:10.797122   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 18:18:10.797134   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:18:10.797148   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:18:10.797161   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:18:10.797169   30631 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 18:18:10.797186   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 18:18:10.797198   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:18:10.797211   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:18:10.797225   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:18:10.797235   30631 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 18:18:10.797247   30631 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 18:18:10.797256   30631 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 18:18:10.797272   30631 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 18:18:10.797282   30631 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 18:18:10.797291   30631 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 18:18:10.797314   30631 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 18:18:10.797389   30631 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 18:18:10.797421   30631 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 18:18:10.797429   30631 command_runner.go:130] > drop_infra_ctr = false
	I0229 18:18:10.797443   30631 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 18:18:10.797455   30631 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 18:18:10.797470   30631 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 18:18:10.797480   30631 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 18:18:10.797494   30631 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 18:18:10.797507   30631 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 18:18:10.797520   30631 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 18:18:10.797531   30631 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 18:18:10.797542   30631 command_runner.go:130] > # shared_cpuset = ""
	I0229 18:18:10.797558   30631 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 18:18:10.797566   30631 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 18:18:10.797575   30631 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 18:18:10.797591   30631 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 18:18:10.797601   30631 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 18:18:10.797613   30631 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 18:18:10.797626   30631 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 18:18:10.797635   30631 command_runner.go:130] > # enable_criu_support = false
	I0229 18:18:10.797647   30631 command_runner.go:130] > # Enable/disable the generation of the container,
	I0229 18:18:10.797656   30631 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0229 18:18:10.797665   30631 command_runner.go:130] > # enable_pod_events = false
	I0229 18:18:10.797679   30631 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 18:18:10.797701   30631 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 18:18:10.797713   30631 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 18:18:10.797722   30631 command_runner.go:130] > # default_runtime = "runc"
	I0229 18:18:10.797734   30631 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 18:18:10.797745   30631 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0229 18:18:10.797762   30631 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 18:18:10.797773   30631 command_runner.go:130] > # creation as a file is not desired either.
	I0229 18:18:10.797790   30631 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 18:18:10.797800   30631 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 18:18:10.797811   30631 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 18:18:10.797819   30631 command_runner.go:130] > # ]
	I0229 18:18:10.797839   30631 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 18:18:10.797853   30631 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 18:18:10.797866   30631 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 18:18:10.797877   30631 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 18:18:10.797885   30631 command_runner.go:130] > #
	I0229 18:18:10.797893   30631 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 18:18:10.797904   30631 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 18:18:10.797913   30631 command_runner.go:130] > # runtime_type = "oci"
	I0229 18:18:10.797969   30631 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 18:18:10.797981   30631 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 18:18:10.797992   30631 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 18:18:10.798002   30631 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 18:18:10.798009   30631 command_runner.go:130] > # monitor_env = []
	I0229 18:18:10.798014   30631 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 18:18:10.798023   30631 command_runner.go:130] > # allowed_annotations = []
	I0229 18:18:10.798036   30631 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 18:18:10.798045   30631 command_runner.go:130] > # Where:
	I0229 18:18:10.798057   30631 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 18:18:10.798069   30631 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 18:18:10.798082   30631 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 18:18:10.798094   30631 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 18:18:10.798101   30631 command_runner.go:130] > #   in $PATH.
	I0229 18:18:10.798111   30631 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 18:18:10.798122   30631 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 18:18:10.798135   30631 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 18:18:10.798147   30631 command_runner.go:130] > #   state.
	I0229 18:18:10.798160   30631 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 18:18:10.798175   30631 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0229 18:18:10.798186   30631 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 18:18:10.798196   30631 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 18:18:10.798209   30631 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 18:18:10.798223   30631 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 18:18:10.798234   30631 command_runner.go:130] > #   The currently recognized values are:
	I0229 18:18:10.798247   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 18:18:10.798262   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 18:18:10.798286   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 18:18:10.798306   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 18:18:10.798322   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 18:18:10.798335   30631 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 18:18:10.798349   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 18:18:10.798362   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 18:18:10.798372   30631 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 18:18:10.798385   30631 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 18:18:10.798395   30631 command_runner.go:130] > #   deprecated option "conmon".
	I0229 18:18:10.798409   30631 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 18:18:10.798420   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 18:18:10.798431   30631 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 18:18:10.798442   30631 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 18:18:10.798455   30631 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0229 18:18:10.798461   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 18:18:10.798471   30631 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 18:18:10.798479   30631 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 18:18:10.798485   30631 command_runner.go:130] > #
	I0229 18:18:10.798492   30631 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 18:18:10.798496   30631 command_runner.go:130] > #
	I0229 18:18:10.798505   30631 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 18:18:10.798516   30631 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 18:18:10.798521   30631 command_runner.go:130] > #
	I0229 18:18:10.798530   30631 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 18:18:10.798542   30631 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 18:18:10.798548   30631 command_runner.go:130] > #
	I0229 18:18:10.798560   30631 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 18:18:10.798569   30631 command_runner.go:130] > # feature.
	I0229 18:18:10.798578   30631 command_runner.go:130] > #
	I0229 18:18:10.798587   30631 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 18:18:10.798600   30631 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 18:18:10.798612   30631 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 18:18:10.798625   30631 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 18:18:10.798635   30631 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 18:18:10.798641   30631 command_runner.go:130] > #
	I0229 18:18:10.798651   30631 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 18:18:10.798665   30631 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 18:18:10.798679   30631 command_runner.go:130] > #
	I0229 18:18:10.798692   30631 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 18:18:10.798703   30631 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 18:18:10.798716   30631 command_runner.go:130] > #
	I0229 18:18:10.798730   30631 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 18:18:10.798746   30631 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 18:18:10.798755   30631 command_runner.go:130] > # limitation.
	I0229 18:18:10.798765   30631 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 18:18:10.798778   30631 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 18:18:10.798787   30631 command_runner.go:130] > runtime_type = "oci"
	I0229 18:18:10.798797   30631 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 18:18:10.798805   30631 command_runner.go:130] > runtime_config_path = ""
	I0229 18:18:10.798814   30631 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 18:18:10.798822   30631 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 18:18:10.798830   30631 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 18:18:10.798839   30631 command_runner.go:130] > monitor_env = [
	I0229 18:18:10.798851   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:18:10.798859   30631 command_runner.go:130] > ]
	I0229 18:18:10.798869   30631 command_runner.go:130] > privileged_without_host_devices = false
	I0229 18:18:10.798883   30631 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 18:18:10.798894   30631 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 18:18:10.798904   30631 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 18:18:10.798914   30631 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0229 18:18:10.798923   30631 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 18:18:10.798935   30631 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 18:18:10.798953   30631 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 18:18:10.798969   30631 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 18:18:10.798983   30631 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 18:18:10.798995   30631 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 18:18:10.799002   30631 command_runner.go:130] > # Example:
	I0229 18:18:10.799010   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 18:18:10.799029   30631 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 18:18:10.799041   30631 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 18:18:10.799050   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 18:18:10.799058   30631 command_runner.go:130] > # cpuset = 0
	I0229 18:18:10.799068   30631 command_runner.go:130] > # cpushares = "0-1"
	I0229 18:18:10.799082   30631 command_runner.go:130] > # Where:
	I0229 18:18:10.799094   30631 command_runner.go:130] > # The workload name is workload-type.
	I0229 18:18:10.799108   30631 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 18:18:10.799120   30631 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 18:18:10.799132   30631 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 18:18:10.799147   30631 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 18:18:10.799156   30631 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0229 18:18:10.799163   30631 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 18:18:10.799170   30631 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 18:18:10.799176   30631 command_runner.go:130] > # Default value is set to true
	I0229 18:18:10.799181   30631 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 18:18:10.799188   30631 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 18:18:10.799193   30631 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 18:18:10.799199   30631 command_runner.go:130] > # Default value is set to 'false'
	I0229 18:18:10.799204   30631 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 18:18:10.799212   30631 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 18:18:10.799215   30631 command_runner.go:130] > #
	I0229 18:18:10.799223   30631 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 18:18:10.799231   30631 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 18:18:10.799240   30631 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 18:18:10.799246   30631 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 18:18:10.799253   30631 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 18:18:10.799256   30631 command_runner.go:130] > [crio.image]
	I0229 18:18:10.799263   30631 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 18:18:10.799274   30631 command_runner.go:130] > # default_transport = "docker://"
	I0229 18:18:10.799282   30631 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 18:18:10.799290   30631 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:18:10.799296   30631 command_runner.go:130] > # global_auth_file = ""
	I0229 18:18:10.799307   30631 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 18:18:10.799318   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:18:10.799330   30631 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 18:18:10.799339   30631 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 18:18:10.799347   30631 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:18:10.799355   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:18:10.799359   30631 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 18:18:10.799366   30631 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 18:18:10.799377   30631 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0229 18:18:10.799385   30631 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0229 18:18:10.799392   30631 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 18:18:10.799398   30631 command_runner.go:130] > # pause_command = "/pause"
	I0229 18:18:10.799404   30631 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 18:18:10.799411   30631 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 18:18:10.799418   30631 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 18:18:10.799425   30631 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 18:18:10.799436   30631 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 18:18:10.799444   30631 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 18:18:10.799451   30631 command_runner.go:130] > # pinned_images = [
	I0229 18:18:10.799454   30631 command_runner.go:130] > # ]
	I0229 18:18:10.799462   30631 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 18:18:10.799470   30631 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 18:18:10.799476   30631 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 18:18:10.799485   30631 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 18:18:10.799492   30631 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 18:18:10.799497   30631 command_runner.go:130] > # signature_policy = ""
	I0229 18:18:10.799503   30631 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 18:18:10.799512   30631 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 18:18:10.799520   30631 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 18:18:10.799526   30631 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0229 18:18:10.799534   30631 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 18:18:10.799541   30631 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 18:18:10.799546   30631 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 18:18:10.799554   30631 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 18:18:10.799558   30631 command_runner.go:130] > # changing them here.
	I0229 18:18:10.799564   30631 command_runner.go:130] > # insecure_registries = [
	I0229 18:18:10.799569   30631 command_runner.go:130] > # ]
	I0229 18:18:10.799577   30631 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 18:18:10.799584   30631 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 18:18:10.799588   30631 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 18:18:10.799596   30631 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 18:18:10.799600   30631 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 18:18:10.799608   30631 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0229 18:18:10.799612   30631 command_runner.go:130] > # CNI plugins.
	I0229 18:18:10.799622   30631 command_runner.go:130] > [crio.network]
	I0229 18:18:10.799630   30631 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 18:18:10.799637   30631 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0229 18:18:10.799642   30631 command_runner.go:130] > # cni_default_network = ""
	I0229 18:18:10.799650   30631 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 18:18:10.799655   30631 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 18:18:10.799660   30631 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 18:18:10.799666   30631 command_runner.go:130] > # plugin_dirs = [
	I0229 18:18:10.799670   30631 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 18:18:10.799675   30631 command_runner.go:130] > # ]
	I0229 18:18:10.799680   30631 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 18:18:10.799686   30631 command_runner.go:130] > [crio.metrics]
	I0229 18:18:10.799693   30631 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 18:18:10.799702   30631 command_runner.go:130] > enable_metrics = true
	I0229 18:18:10.799713   30631 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 18:18:10.799723   30631 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 18:18:10.799729   30631 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0229 18:18:10.799737   30631 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 18:18:10.799743   30631 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 18:18:10.799749   30631 command_runner.go:130] > # metrics_collectors = [
	I0229 18:18:10.799753   30631 command_runner.go:130] > # 	"operations",
	I0229 18:18:10.799760   30631 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 18:18:10.799764   30631 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 18:18:10.799771   30631 command_runner.go:130] > # 	"operations_errors",
	I0229 18:18:10.799776   30631 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 18:18:10.799782   30631 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 18:18:10.799786   30631 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 18:18:10.799793   30631 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 18:18:10.799797   30631 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 18:18:10.799804   30631 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 18:18:10.799811   30631 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 18:18:10.799816   30631 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 18:18:10.799819   30631 command_runner.go:130] > # 	"containers_oom_total",
	I0229 18:18:10.799824   30631 command_runner.go:130] > # 	"containers_oom",
	I0229 18:18:10.799828   30631 command_runner.go:130] > # 	"processes_defunct",
	I0229 18:18:10.799832   30631 command_runner.go:130] > # 	"operations_total",
	I0229 18:18:10.799848   30631 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 18:18:10.799856   30631 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 18:18:10.799859   30631 command_runner.go:130] > # 	"operations_errors_total",
	I0229 18:18:10.799864   30631 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 18:18:10.799868   30631 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 18:18:10.799875   30631 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 18:18:10.799879   30631 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 18:18:10.799885   30631 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 18:18:10.799890   30631 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 18:18:10.799897   30631 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 18:18:10.799901   30631 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 18:18:10.799904   30631 command_runner.go:130] > # ]
	I0229 18:18:10.799912   30631 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 18:18:10.799916   30631 command_runner.go:130] > # metrics_port = 9090
	I0229 18:18:10.799923   30631 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 18:18:10.799927   30631 command_runner.go:130] > # metrics_socket = ""
	I0229 18:18:10.799931   30631 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 18:18:10.799939   30631 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 18:18:10.799947   30631 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 18:18:10.799954   30631 command_runner.go:130] > # certificate on any modification event.
	I0229 18:18:10.799959   30631 command_runner.go:130] > # metrics_cert = ""
	I0229 18:18:10.799966   30631 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 18:18:10.799970   30631 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 18:18:10.799976   30631 command_runner.go:130] > # metrics_key = ""
	I0229 18:18:10.799982   30631 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 18:18:10.799988   30631 command_runner.go:130] > [crio.tracing]
	I0229 18:18:10.799993   30631 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 18:18:10.800000   30631 command_runner.go:130] > # enable_tracing = false
	I0229 18:18:10.800005   30631 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0229 18:18:10.800012   30631 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 18:18:10.800018   30631 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 18:18:10.800025   30631 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 18:18:10.800029   30631 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 18:18:10.800035   30631 command_runner.go:130] > [crio.nri]
	I0229 18:18:10.800039   30631 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 18:18:10.800045   30631 command_runner.go:130] > # enable_nri = false
	I0229 18:18:10.800054   30631 command_runner.go:130] > # NRI socket to listen on.
	I0229 18:18:10.800064   30631 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 18:18:10.800072   30631 command_runner.go:130] > # NRI plugin directory to use.
	I0229 18:18:10.800081   30631 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 18:18:10.800091   30631 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 18:18:10.800101   30631 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 18:18:10.800112   30631 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 18:18:10.800122   30631 command_runner.go:130] > # nri_disable_connections = false
	I0229 18:18:10.800134   30631 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 18:18:10.800141   30631 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 18:18:10.800146   30631 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 18:18:10.800153   30631 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 18:18:10.800159   30631 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 18:18:10.800165   30631 command_runner.go:130] > [crio.stats]
	I0229 18:18:10.800171   30631 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 18:18:10.800178   30631 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 18:18:10.800183   30631 command_runner.go:130] > # stats_collection_period = 0
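The lines above are CRI-O's commented default configuration as minikube echoes it back while provisioning the node. As a rough illustration only (not part of minikube or this test), the [crio.metrics] values shown above could be read back out of a crio.conf with a few lines of Go; the /etc/crio/crio.conf path and the github.com/BurntSushi/toml dependency are assumptions here.

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml" // assumed third-party TOML parser
    )

    // crioConfig models only the [crio.metrics] table shown in the log above.
    type crioConfig struct {
    	Crio struct {
    		Metrics struct {
    			EnableMetrics bool `toml:"enable_metrics"`
    			MetricsPort   int  `toml:"metrics_port"`
    		} `toml:"metrics"`
    	} `toml:"crio"`
    }

    func main() {
    	var cfg crioConfig
    	// Path is an assumption; CRI-O's main config commonly lives here.
    	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("metrics enabled: %v (port %d)\n",
    		cfg.Crio.Metrics.EnableMetrics, cfg.Crio.Metrics.MetricsPort)
    }

Commented-out keys such as metrics_port decode to their zero value with this sketch; only enable_metrics is set explicitly in the config dumped above.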
	I0229 18:18:10.800273   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:18:10.800285   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:18:10.800300   30631 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:18:10.800323   30631 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-051105 NodeName:multinode-051105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:18:10.800464   30631 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-051105"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:18:10.800539   30631 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-051105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
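The kubeadm YAML and the kubelet systemd drop-in above are rendered from the options struct logged at 18:18:10.800323 and then pushed to the VM. A minimal sketch of that render-from-struct pattern using Go's text/template (simplified field names and a cut-down template, not minikube's real one) looks like this:

    package main

    import (
    	"os"
    	"text/template"
    )

    // params is a cut-down stand-in for minikube's kubeadm parameter struct.
    type params struct {
    	AdvertiseAddress  string
    	APIServerPort     int
    	NodeName          string
    	PodSubnet         string
    	ServiceCIDR       string
    	KubernetesVersion string
    }

    const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initConfig))
    	// Values mirror the ones visible in the log above.
    	_ = t.Execute(os.Stdout, params{
    		AdvertiseAddress:  "192.168.39.200",
    		APIServerPort:     8443,
    		NodeName:          "multinode-051105",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    		KubernetesVersion: "v1.28.4",
    	})
    }

The rendered bytes are then written over SSH ("scp memory") to /var/tmp/minikube/kubeadm.yaml.new and the kubelet unit files, as the next few log lines show.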
	I0229 18:18:10.800591   30631 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:18:10.811904   30631 command_runner.go:130] > kubeadm
	I0229 18:18:10.811919   30631 command_runner.go:130] > kubectl
	I0229 18:18:10.811923   30631 command_runner.go:130] > kubelet
	I0229 18:18:10.811936   30631 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:18:10.811993   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:18:10.824981   30631 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0229 18:18:10.844422   30631 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:18:10.863949   30631 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0229 18:18:10.883379   30631 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0229 18:18:10.887839   30631 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
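The bash one-liner above keeps /etc/hosts idempotent: any existing line for control-plane.minikube.internal is stripped and a fresh IP mapping is appended. A stdlib-only Go equivalent of the same idea (ensureHostsEntry is a hypothetical helper; writing /etc/hosts requires root) might look like:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mimics the bash one-liner in the log: drop any stale
    // line for the host name, then append the desired IP mapping.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry, rewritten below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Values taken from the log above.
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.200", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }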
	I0229 18:18:10.901895   30631 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105 for IP: 192.168.39.200
	I0229 18:18:10.901922   30631 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:18:10.902058   30631 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:18:10.902095   30631 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:18:10.902154   30631 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key
	I0229 18:18:10.902202   30631 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/apiserver.key.8606e1b3
	I0229 18:18:10.902248   30631 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/proxy-client.key
	I0229 18:18:10.902258   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:18:10.902270   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:18:10.902287   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:18:10.902299   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:18:10.902312   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:18:10.902327   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:18:10.902343   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:18:10.902360   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:18:10.902414   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:18:10.902454   30631 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:18:10.902464   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:18:10.902483   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:18:10.902504   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:18:10.902525   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:18:10.902563   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:18:10.902587   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem -> /usr/share/ca-certificates/13651.pem
	I0229 18:18:10.902600   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /usr/share/ca-certificates/136512.pem
	I0229 18:18:10.902611   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:18:10.903181   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:18:10.931004   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:18:10.958012   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:18:10.989980   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:18:11.019177   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:18:11.046152   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:18:11.073174   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:18:11.100452   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:18:11.128622   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:18:11.155454   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:18:11.182132   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:18:11.208969   30631 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:18:11.228570   30631 ssh_runner.go:195] Run: openssl version
	I0229 18:18:11.234902   30631 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:18:11.234975   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:18:11.247313   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:18:11.252478   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:18:11.252501   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:18:11.252531   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:18:11.258959   30631 command_runner.go:130] > b5213941
	I0229 18:18:11.259248   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:18:11.273631   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:18:11.287917   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:18:11.292984   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:18:11.293194   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:18:11.293254   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:18:11.299911   30631 command_runner.go:130] > 51391683
	I0229 18:18:11.299978   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:18:11.314499   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:18:11.328855   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:18:11.334210   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:18:11.334482   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:18:11.334521   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:18:11.340947   30631 command_runner.go:130] > 3ec20f2e
	I0229 18:18:11.341212   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
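Each of the three ls/hash/ln sequences above installs a CA into the system trust directory: openssl prints the certificate's subject hash (b5213941, 51391683, 3ec20f2e) and a <hash>.0 symlink is created under /etc/ssl/certs. A rough Go equivalent of that shell sequence (one certificate shown; paths taken from the log, error handling trimmed):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA reproduces the shell steps in the log: compute the OpenSSL
    // subject hash of a PEM certificate and point <hash>.0 at it.
    func installCA(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	if err := os.Symlink(pemPath, link); err != nil {
    		return err
    	}
    	fmt.Println("linked", link, "->", pemPath)
    	return nil
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    }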
	I0229 18:18:11.355857   30631 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:18:11.361118   30631 command_runner.go:130] > ca.crt
	I0229 18:18:11.361132   30631 command_runner.go:130] > ca.key
	I0229 18:18:11.361137   30631 command_runner.go:130] > healthcheck-client.crt
	I0229 18:18:11.361159   30631 command_runner.go:130] > healthcheck-client.key
	I0229 18:18:11.361164   30631 command_runner.go:130] > peer.crt
	I0229 18:18:11.361168   30631 command_runner.go:130] > peer.key
	I0229 18:18:11.361172   30631 command_runner.go:130] > server.crt
	I0229 18:18:11.361176   30631 command_runner.go:130] > server.key
	I0229 18:18:11.361225   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:18:11.367999   30631 command_runner.go:130] > Certificate will not expire
	I0229 18:18:11.368288   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:18:11.375132   30631 command_runner.go:130] > Certificate will not expire
	I0229 18:18:11.375199   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:18:11.381783   30631 command_runner.go:130] > Certificate will not expire
	I0229 18:18:11.381840   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:18:11.388574   30631 command_runner.go:130] > Certificate will not expire
	I0229 18:18:11.388717   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:18:11.395277   30631 command_runner.go:130] > Certificate will not expire
	I0229 18:18:11.395507   30631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:18:11.402055   30631 command_runner.go:130] > Certificate will not expire
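openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is why every control-plane certificate is probed before being reused. The same check can be done natively with crypto/x509; a small sketch (the file path is just one of the certs from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }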
	I0229 18:18:11.402111   30631 kubeadm.go:404] StartCluster: {Name:multinode-051105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:18:11.402276   30631 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:18:11.402328   30631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:18:11.448846   30631 cri.go:89] found id: ""
	I0229 18:18:11.448940   30631 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:18:11.462125   30631 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 18:18:11.462143   30631 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 18:18:11.462149   30631 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 18:18:11.462153   30631 command_runner.go:130] > member
	I0229 18:18:11.462162   30631 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:18:11.462167   30631 kubeadm.go:636] restartCluster start
	I0229 18:18:11.462211   30631 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:18:11.474635   30631 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:11.475214   30631 kubeconfig.go:92] found "multinode-051105" server: "https://192.168.39.200:8443"
	I0229 18:18:11.475620   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:18:11.475827   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:18:11.476282   30631 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 18:18:11.476415   30631 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:18:11.488070   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:11.488135   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:11.502452   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:11.989094   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:11.989157   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:12.004842   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:12.488344   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:12.488408   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:12.503790   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:12.988306   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:12.988411   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:13.005198   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:13.488941   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:13.489047   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:13.502026   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:13.988180   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:13.988265   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:14.001872   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:14.488457   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:14.488525   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:14.503176   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:14.988820   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:14.988917   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:15.002464   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:15.489089   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:15.489157   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:15.503375   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:15.989002   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:15.989069   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:16.003161   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:16.489147   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:16.489250   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:16.502854   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:16.988415   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:16.988495   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:17.003233   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:17.488819   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:17.488893   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:17.502584   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:17.988147   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:17.988216   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:18.001684   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:18.488611   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:18.488677   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:18.502229   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:18.988834   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:18.988912   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:19.002592   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:19.488115   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:19.488188   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:19.502242   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:19.988854   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:19.988967   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:20.002438   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:20.489087   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:20.489156   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:20.502887   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:20.988450   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:20.988555   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:21.002388   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:21.488573   30631 api_server.go:166] Checking apiserver status ...
	I0229 18:18:21.488693   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:18:21.502564   30631 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:18:21.502591   30631 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
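The repeated "Checking apiserver status" entries above are a bounded poll: roughly every 500ms a pgrep for the apiserver is run, and the loop gives up once the context deadline passes, which is what produces the "needs reconfigure: apiserver error: context deadline exceeded" decision. A stripped-down version of such a loop (interval and timeout here are assumptions, not minikube's exact values):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until a kube-apiserver process shows up
    // or the context deadline expires, like the loop in the log above.
    func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return string(out), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", ctx.Err() // e.g. context deadline exceeded
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
    	fmt.Println(pid, err)
    }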
	I0229 18:18:21.502601   30631 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:18:21.502641   30631 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:18:21.502715   30631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:18:21.548098   30631 cri.go:89] found id: ""
	I0229 18:18:21.548158   30631 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:18:21.569023   30631 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:18:21.580518   30631 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 18:18:21.580667   30631 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 18:18:21.581146   30631 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 18:18:21.581503   30631 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:18:21.581925   30631 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:18:21.581974   30631 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:18:21.592556   30631 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:18:21.592583   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:21.689109   30631 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:18:21.689415   30631 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 18:18:21.689834   30631 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 18:18:21.690320   30631 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:18:21.690886   30631 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 18:18:21.691396   30631 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:18:21.692193   30631 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 18:18:21.692585   30631 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 18:18:21.693088   30631 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:18:21.693408   30631 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:18:21.693799   30631 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:18:21.695792   30631 command_runner.go:130] > [certs] Using the existing "sa" key
	I0229 18:18:21.695960   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:22.889130   30631 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:18:22.889158   30631 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:18:22.889165   30631 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:18:22.889171   30631 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:18:22.889177   30631 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:18:22.889319   30631 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193329269s)
	I0229 18:18:22.889350   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:22.961413   30631 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:18:22.964534   30631 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:18:22.964744   30631 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 18:18:23.104847   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:23.175371   30631 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:18:23.175410   30631 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:18:23.179935   30631 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:18:23.181777   30631 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:18:23.183772   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:23.264524   30631 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
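Because the kubeconfig files were missing, the cluster is reconfigured phase by phase rather than with a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane and etcd, each run against the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of driving those phases from Go (binary path and phase order copied from the log; the sudo/env PATH wrapping is omitted):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Same phase order as the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all", "--config", cfg},
    		{"init", "phase", "kubeconfig", "all", "--config", cfg},
    		{"init", "phase", "kubelet-start", "--config", cfg},
    		{"init", "phase", "control-plane", "all", "--config", cfg},
    		{"init", "phase", "etcd", "local", "--config", cfg},
    	}
    	for _, args := range phases {
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("kubeadm %v: %v", args, err)
    		}
    	}
    }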
	I0229 18:18:23.268937   30631 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:18:23.269011   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:18:23.769243   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:18:24.269931   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:18:24.290756   30631 command_runner.go:130] > 1086
	I0229 18:18:24.292468   30631 api_server.go:72] duration metric: took 1.023534032s to wait for apiserver process to appear ...
	I0229 18:18:24.292488   30631 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:18:24.292507   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:24.292990   30631 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I0229 18:18:24.792544   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:27.223110   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:18:27.223139   30631 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:18:27.223154   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:27.287922   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:18:27.287946   30631 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:18:27.293096   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:27.311940   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:18:27.311967   30631 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:18:27.793367   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:27.801561   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:18:27.801593   30631 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:18:28.293448   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:28.301799   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:18:28.301829   30631 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:18:28.793430   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:28.797998   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
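Once the process exists, readiness is decided by GET /healthz: first connection refused, then 403 while anonymous access is rejected, then 500 while the rbac and priority-class post-start hooks are still failing, and finally 200 "ok". A minimal standalone probe using the client certificate paths from the rest.Config logged at 18:18:11.475827 (a sketch, not the test's own code):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    )

    func main() {
    	// Paths mirror the kapi client config in the log above.
    	cert, err := tls.LoadX509KeyPair(
    		"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt",
    		"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.39.200:8443/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect "200 ok" once ready
    }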
	I0229 18:18:28.798084   30631 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0229 18:18:28.798095   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:28.798107   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:28.798117   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:28.804766   30631 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:18:28.804783   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:28.804789   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:28.804794   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:28.804797   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:28.804810   30631 round_trippers.go:580]     Content-Length: 264
	I0229 18:18:28.804813   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:28 GMT
	I0229 18:18:28.804815   30631 round_trippers.go:580]     Audit-Id: 54daed22-e4aa-4a71-b194-0cedd187557b
	I0229 18:18:28.804819   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:28.804847   30631 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 18:18:28.804918   30631 api_server.go:141] control plane version: v1.28.4
	I0229 18:18:28.804933   30631 api_server.go:131] duration metric: took 4.51243994s to wait for apiserver health ...
	I0229 18:18:28.804940   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:18:28.804945   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:18:28.806840   30631 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 18:18:28.808248   30631 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:18:28.814128   30631 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 18:18:28.814151   30631 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 18:18:28.814160   30631 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 18:18:28.814171   30631 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:18:28.814177   30631 command_runner.go:130] > Access: 2024-02-29 18:17:58.604411532 +0000
	I0229 18:18:28.814186   30631 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 18:18:28.814196   30631 command_runner.go:130] > Change: 2024-02-29 18:17:57.283411532 +0000
	I0229 18:18:28.814202   30631 command_runner.go:130] >  Birth: -
	I0229 18:18:28.814453   30631 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:18:28.814472   30631 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 18:18:28.834268   30631 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:18:29.808251   30631 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:18:29.814040   30631 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:18:29.816782   30631 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 18:18:29.830119   30631 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 18:18:29.832739   30631 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:18:29.832872   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:29.832884   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:29.832892   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:29.832897   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:29.837857   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:29.837872   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:29.837878   30631 round_trippers.go:580]     Audit-Id: aa1086b7-6885-4709-8a3f-0e8b4477aaa0
	I0229 18:18:29.837882   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:29.837885   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:29.837888   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:29.837890   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:29.837892   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:29 GMT
	I0229 18:18:29.839278   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"818"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82839 chars]
	I0229 18:18:29.844631   30631 system_pods.go:59] 12 kube-system pods found
	I0229 18:18:29.844666   30631 system_pods.go:61] "coredns-5dd5756b68-bwhnb" [a3853502-49ad-4d24-8c63-3000e4f4aa8e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:18:29.844679   30631 system_pods.go:61] "etcd-multinode-051105" [e73d8125-9770-4ddf-a382-a19adc1ed94f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:18:29.844691   30631 system_pods.go:61] "kindnet-c2ztr" [c5679d05-61cd-4fc6-8fc0-93481b041891] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:29.844700   30631 system_pods.go:61] "kindnet-kvkf2" [207f0896-6db7-45e5-9278-bffc8efa19c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:29.844710   30631 system_pods.go:61] "kindnet-r2q5q" [4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:29.844722   30631 system_pods.go:61] "kube-apiserver-multinode-051105" [722abb81-d303-4fa9-bcbb-8c16aaf4421d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:18:29.844733   30631 system_pods.go:61] "kube-controller-manager-multinode-051105" [a3156cba-a585-47c6-8b26-2069af0021ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:18:29.844743   30631 system_pods.go:61] "kube-proxy-cbl8s" [352ba5ff-0a79-4766-8a3f-a0860aad1b91] Running
	I0229 18:18:29.844748   30631 system_pods.go:61] "kube-proxy-jfw9f" [45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f] Running
	I0229 18:18:29.844760   30631 system_pods.go:61] "kube-proxy-wvhlx" [5548dfdd-2cda-48bc-9359-95eda53437d4] Running
	I0229 18:18:29.844769   30631 system_pods.go:61] "kube-scheduler-multinode-051105" [de579522-4a2a-4a66-86f0-8fd37603bb85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:18:29.844778   30631 system_pods.go:61] "storage-provisioner" [40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4] Running
	I0229 18:18:29.844787   30631 system_pods.go:74] duration metric: took 12.031954ms to wait for pod list to return data ...
	I0229 18:18:29.844796   30631 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:18:29.844853   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0229 18:18:29.844862   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:29.844872   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:29.844877   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:29.847785   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:29.847798   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:29.847804   30631 round_trippers.go:580]     Audit-Id: 64a4132c-9e60-4699-9e93-7b8cf7ee8304
	I0229 18:18:29.847808   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:29.847810   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:29.847813   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:29.847817   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:29.847821   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:29 GMT
	I0229 18:18:29.849128   30631 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"818"},"items":[{"metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16475 chars]
	I0229 18:18:29.849943   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:29.849970   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:29.849979   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:29.849983   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:29.849987   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:29.849991   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:29.850000   30631 node_conditions.go:105] duration metric: took 5.1975ms to run NodePressure ...
	I0229 18:18:29.850019   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:18:30.010872   30631 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 18:18:30.075080   30631 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 18:18:30.076868   30631 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:18:30.076979   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0229 18:18:30.076992   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.077003   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.077011   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.079886   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:30.079906   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.079916   30631 round_trippers.go:580]     Audit-Id: bf3afd0d-2eda-407d-b402-21b7b1015713
	I0229 18:18:30.079937   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.079940   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.079946   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.079950   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.079954   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.080632   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"820"},"items":[{"metadata":{"name":"etcd-multinode-051105","namespace":"kube-system","uid":"e73d8125-9770-4ddf-a382-a19adc1ed94f","resourceVersion":"802","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.200:2379","kubernetes.io/config.hash":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.mirror":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.seen":"2024-02-29T18:06:55.285569285Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28735 chars]
	I0229 18:18:30.081966   30631 kubeadm.go:787] kubelet initialised
	I0229 18:18:30.081989   30631 kubeadm.go:788] duration metric: took 5.1021ms waiting for restarted kubelet to initialise ...
	I0229 18:18:30.082002   30631 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:18:30.082068   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:30.082079   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.082093   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.082099   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.085154   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:30.085172   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.085182   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.085187   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.085192   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.085195   30631 round_trippers.go:580]     Audit-Id: 663c39b4-b195-489e-b93d-0d8baba7879f
	I0229 18:18:30.085199   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.085203   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.086237   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"820"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82839 chars]
	I0229 18:18:30.089765   30631 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.089849   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:30.089858   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.089868   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.089875   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.091900   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:30.091915   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.091923   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.091929   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.091934   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.091938   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.091942   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.091947   30631 round_trippers.go:580]     Audit-Id: 322e7e68-b88f-4e85-8641-51ec0336188e
	I0229 18:18:30.092143   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:30.092505   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:30.092516   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.092522   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.092526   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.094531   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:30.094543   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.094555   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.094559   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.094563   30631 round_trippers.go:580]     Audit-Id: 23142e7d-6417-484a-85c0-836606dcb7e5
	I0229 18:18:30.094566   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.094572   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.094576   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.094838   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:30.095235   30631 pod_ready.go:97] node "multinode-051105" hosting pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.095255   30631 pod_ready.go:81] duration metric: took 5.468251ms waiting for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:30.095266   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.095273   30631 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.095344   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-051105
	I0229 18:18:30.095356   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.095364   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.095368   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.096995   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:30.097009   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.097014   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.097018   30631 round_trippers.go:580]     Audit-Id: 2f598f8c-4a38-423c-82ae-c48f94f30f97
	I0229 18:18:30.097022   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.097025   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.097028   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.097031   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.097161   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-051105","namespace":"kube-system","uid":"e73d8125-9770-4ddf-a382-a19adc1ed94f","resourceVersion":"802","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.200:2379","kubernetes.io/config.hash":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.mirror":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.seen":"2024-02-29T18:06:55.285569285Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 18:18:30.097566   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:30.097582   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.097592   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.097599   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.099469   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:30.099483   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.099491   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.099495   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.099498   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.099500   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.099504   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.099507   30631 round_trippers.go:580]     Audit-Id: 4774c818-2876-4e91-bd8a-c8db0ff3cdc9
	I0229 18:18:30.099747   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:30.100117   30631 pod_ready.go:97] node "multinode-051105" hosting pod "etcd-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.100140   30631 pod_ready.go:81] duration metric: took 4.857661ms waiting for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:30.100150   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "etcd-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.100169   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.100230   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-051105
	I0229 18:18:30.100240   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.100249   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.100255   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.101805   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:30.101816   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.101821   30631 round_trippers.go:580]     Audit-Id: 26c90093-0f52-45b8-bd20-c354c93cd70d
	I0229 18:18:30.101825   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.101829   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.101832   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.101839   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.101843   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.101997   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-051105","namespace":"kube-system","uid":"722abb81-d303-4fa9-bcbb-8c16aaf4421d","resourceVersion":"803","creationTimestamp":"2024-02-29T18:07:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.200:8443","kubernetes.io/config.hash":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.mirror":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.seen":"2024-02-29T18:07:02.423715355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7595 chars]
	I0229 18:18:30.102339   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:30.102352   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.102359   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.102364   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.104161   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:30.104172   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.104177   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.104181   30631 round_trippers.go:580]     Audit-Id: 993ea043-0f3e-458a-afde-1c1bb33d5b5f
	I0229 18:18:30.104189   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.104192   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.104195   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.104198   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.104391   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:30.104742   30631 pod_ready.go:97] node "multinode-051105" hosting pod "kube-apiserver-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.104763   30631 pod_ready.go:81] duration metric: took 4.580935ms waiting for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:30.104772   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "kube-apiserver-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.104780   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.104830   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-051105
	I0229 18:18:30.104841   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.104857   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.104865   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.108855   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:30.108869   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.108877   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.108882   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.108887   30631 round_trippers.go:580]     Audit-Id: 1cb20467-2ea1-4ed2-afde-41645b86d5b3
	I0229 18:18:30.108892   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.108896   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.108900   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.109158   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-051105","namespace":"kube-system","uid":"a3156cba-a585-47c6-8b26-2069af0021ce","resourceVersion":"805","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.mirror":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.seen":"2024-02-29T18:06:55.285572192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7169 chars]
	I0229 18:18:30.233835   30631 request.go:629] Waited for 124.260932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:30.233900   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:30.233905   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.233912   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.233917   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.236861   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:30.236882   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.236892   30631 round_trippers.go:580]     Audit-Id: 0b3206f6-9870-4401-a23a-4e05b0458ba0
	I0229 18:18:30.236896   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.236901   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.236906   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.236912   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.236916   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.237039   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:30.237423   30631 pod_ready.go:97] node "multinode-051105" hosting pod "kube-controller-manager-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.237447   30631 pod_ready.go:81] duration metric: took 132.659023ms waiting for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:30.237455   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "kube-controller-manager-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:30.237462   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.433881   30631 request.go:629] Waited for 196.360213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:18:30.433979   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:18:30.433992   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.434002   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.434008   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.437584   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:30.437605   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.437615   30631 round_trippers.go:580]     Audit-Id: 44a7a404-ec5d-41e3-819b-4b21bf1f8e40
	I0229 18:18:30.437620   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.437624   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.437630   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.437634   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.437638   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.437968   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"574","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5492 chars]
	I0229 18:18:30.633773   30631 request.go:629] Waited for 195.376215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:18:30.633848   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:18:30.633856   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.633866   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.633876   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.637874   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:30.637896   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.637906   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.637913   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.637917   30631 round_trippers.go:580]     Audit-Id: ac2c3166-f844-4361-95d3-d57683befc8a
	I0229 18:18:30.637921   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.637934   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.637939   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.638056   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"d9c0ff3f-8bc0-4054-a484-27b1793b2e4e","resourceVersion":"818","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_10_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0229 18:18:30.638420   30631 pod_ready.go:92] pod "kube-proxy-cbl8s" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:30.638447   30631 pod_ready.go:81] duration metric: took 400.978001ms waiting for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.638460   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:30.833360   30631 request.go:629] Waited for 194.815686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:18:30.833407   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:18:30.833412   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:30.833419   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:30.833423   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:30.842191   30631 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 18:18:30.842215   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:30.842232   30631 round_trippers.go:580]     Audit-Id: 168662c7-2570-4097-b461-1310a54ac98e
	I0229 18:18:30.842239   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:30.842244   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:30.842248   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:30.842253   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:30.842257   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:30 GMT
	I0229 18:18:30.842463   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfw9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f","resourceVersion":"780","creationTimestamp":"2024-02-29T18:09:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5488 chars]
	I0229 18:18:31.033314   30631 request.go:629] Waited for 190.342711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:18:31.033383   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:18:31.033405   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.033415   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.033424   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.041515   30631 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 18:18:31.041534   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.041545   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.041551   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.041556   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.041575   30631 round_trippers.go:580]     Audit-Id: dcca3928-1a8d-4136-944c-516da5224605
	I0229 18:18:31.041578   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.041581   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.041869   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"2aa133ce-8b37-4464-acdc-adffba00e813","resourceVersion":"817","creationTimestamp":"2024-02-29T18:10:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_10_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:10:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4084 chars]
	I0229 18:18:31.042237   30631 pod_ready.go:92] pod "kube-proxy-jfw9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:31.042258   30631 pod_ready.go:81] duration metric: took 403.790603ms waiting for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:31.042271   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:31.233629   30631 request.go:629] Waited for 191.279055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:18:31.233688   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:18:31.233693   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.233700   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.233705   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.238393   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:31.238409   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.238415   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.238419   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.238423   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.238426   30631 round_trippers.go:580]     Audit-Id: b4c1068c-68b0-4370-9d3e-86b53a9a604f
	I0229 18:18:31.238429   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.238433   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.238548   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wvhlx","generateName":"kube-proxy-","namespace":"kube-system","uid":"5548dfdd-2cda-48bc-9359-95eda53437d4","resourceVersion":"814","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 18:18:31.433291   30631 request.go:629] Waited for 194.371054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:31.433343   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:31.433348   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.433374   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.433379   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.436137   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:31.436151   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.436157   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.436161   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.436164   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.436169   30631 round_trippers.go:580]     Audit-Id: 68ee14b9-1518-4b46-a255-bfdf70449cdf
	I0229 18:18:31.436173   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.436177   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.436517   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:31.436804   30631 pod_ready.go:97] node "multinode-051105" hosting pod "kube-proxy-wvhlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:31.436819   30631 pod_ready.go:81] duration metric: took 394.542183ms waiting for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:31.436827   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "kube-proxy-wvhlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:31.436833   30631 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:31.632938   30631 request.go:629] Waited for 195.99262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:18:31.633033   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:18:31.633041   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.633051   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.633064   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.641065   30631 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:18:31.641090   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.641100   30631 round_trippers.go:580]     Audit-Id: 552092e6-ded7-43e3-974c-33da73f1f415
	I0229 18:18:31.641106   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.641113   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.641120   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.641125   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.641130   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.641275   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-051105","namespace":"kube-system","uid":"de579522-4a2a-4a66-86f0-8fd37603bb85","resourceVersion":"806","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.mirror":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.seen":"2024-02-29T18:06:55.285517129Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4890 chars]
	I0229 18:18:31.832974   30631 request.go:629] Waited for 191.245977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:31.833080   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:31.833092   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.833110   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.833121   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.835530   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:31.835546   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.835552   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.835555   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.835559   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.835562   30631 round_trippers.go:580]     Audit-Id: 6c3e562a-440f-46fa-a32b-158cfc5432c5
	I0229 18:18:31.835564   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.835567   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.835757   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:31.836184   30631 pod_ready.go:97] node "multinode-051105" hosting pod "kube-scheduler-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:31.836214   30631 pod_ready.go:81] duration metric: took 399.373793ms waiting for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	E0229 18:18:31.836226   30631 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-051105" hosting pod "kube-scheduler-multinode-051105" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-051105" has status "Ready":"False"
	I0229 18:18:31.836235   30631 pod_ready.go:38] duration metric: took 1.754218447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:18:31.836258   30631 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:18:31.855573   30631 command_runner.go:130] > -16
	I0229 18:18:31.855605   30631 ops.go:34] apiserver oom_adj: -16
	I0229 18:18:31.855613   30631 kubeadm.go:640] restartCluster took 20.393440701s
	I0229 18:18:31.855622   30631 kubeadm.go:406] StartCluster complete in 20.453515134s
	I0229 18:18:31.855651   30631 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:18:31.855731   30631 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:18:31.856600   30631 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:18:31.856858   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:18:31.856996   30631 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:18:31.860401   30631 out.go:177] * Enabled addons: 
	I0229 18:18:31.857175   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:18:31.857246   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:18:31.861750   30631 addons.go:505] enable addons completed in 4.75309ms: enabled=[]
	I0229 18:18:31.861972   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
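The rest.Config dump above leaves QPS and Burst at 0, so client-go falls back to its defaults (5 requests/second, burst 10); that is what produces the earlier "Waited for ... due to client-side throttling, not priority and fairness" entries when the node and pod polls arrive in bursts. A minimal sketch of raising those client-side limits is below; the kubeconfig path and the chosen values are illustrative assumptions, not what minikube itself configures.

// Sketch (assumptions noted above): raise client-go's client-side rate
// limits so bursts of GETs are not delayed by the default limiter.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // 0 means "use the default", which is 5
	cfg.Burst = 100 // 0 means "use the default", which is 10

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-051105", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, "conditions:", node.Status.Conditions)
}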
	I0229 18:18:31.862244   30631 round_trippers.go:463] GET https://192.168.39.200:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:18:31.862253   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:31.862260   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:31.862263   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:31.865022   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:31.865038   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:31.865046   30631 round_trippers.go:580]     Audit-Id: bafd99c8-62b7-4aeb-9e8a-dffc0819949d
	I0229 18:18:31.865052   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:31.865056   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:31.865061   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:31.865065   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:31.865071   30631 round_trippers.go:580]     Content-Length: 291
	I0229 18:18:31.865077   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:31 GMT
	I0229 18:18:31.865123   30631 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"980f57f9-4c9b-43a5-b35c-61bcb3268764","resourceVersion":"819","creationTimestamp":"2024-02-29T18:07:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 18:18:31.865294   30631 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-051105" context rescaled to 1 replicas
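The scale check just above reads the coredns Deployment's Scale subresource (the autoscaling/v1 body in the response) and leaves it alone when it already matches the desired replica count. A hedged client-go sketch of that read-then-rescale pattern, with assumed package and function names, is:

// Sketch of the read-then-rescale step behind the "rescaled to 1 replicas"
// line above. Package and function names are assumptions; GetScale and
// UpdateScale are the client-go Scale-subresource calls.
package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, nothing to update
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}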
	I0229 18:18:31.865323   30631 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:18:31.866720   30631 out.go:177] * Verifying Kubernetes components...
	I0229 18:18:31.868083   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:18:31.982453   30631 command_runner.go:130] > apiVersion: v1
	I0229 18:18:31.982488   30631 command_runner.go:130] > data:
	I0229 18:18:31.982496   30631 command_runner.go:130] >   Corefile: |
	I0229 18:18:31.982502   30631 command_runner.go:130] >     .:53 {
	I0229 18:18:31.982508   30631 command_runner.go:130] >         log
	I0229 18:18:31.982514   30631 command_runner.go:130] >         errors
	I0229 18:18:31.982520   30631 command_runner.go:130] >         health {
	I0229 18:18:31.982526   30631 command_runner.go:130] >            lameduck 5s
	I0229 18:18:31.982531   30631 command_runner.go:130] >         }
	I0229 18:18:31.982542   30631 command_runner.go:130] >         ready
	I0229 18:18:31.982549   30631 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 18:18:31.982555   30631 command_runner.go:130] >            pods insecure
	I0229 18:18:31.982565   30631 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 18:18:31.982571   30631 command_runner.go:130] >            ttl 30
	I0229 18:18:31.982579   30631 command_runner.go:130] >         }
	I0229 18:18:31.982586   30631 command_runner.go:130] >         prometheus :9153
	I0229 18:18:31.982592   30631 command_runner.go:130] >         hosts {
	I0229 18:18:31.982601   30631 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0229 18:18:31.982608   30631 command_runner.go:130] >            fallthrough
	I0229 18:18:31.982613   30631 command_runner.go:130] >         }
	I0229 18:18:31.982621   30631 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 18:18:31.982628   30631 command_runner.go:130] >            max_concurrent 1000
	I0229 18:18:31.982645   30631 command_runner.go:130] >         }
	I0229 18:18:31.982650   30631 command_runner.go:130] >         cache 30
	I0229 18:18:31.982662   30631 command_runner.go:130] >         loop
	I0229 18:18:31.982668   30631 command_runner.go:130] >         reload
	I0229 18:18:31.982676   30631 command_runner.go:130] >         loadbalance
	I0229 18:18:31.982682   30631 command_runner.go:130] >     }
	I0229 18:18:31.982692   30631 command_runner.go:130] > kind: ConfigMap
	I0229 18:18:31.982698   30631 command_runner.go:130] > metadata:
	I0229 18:18:31.982706   30631 command_runner.go:130] >   creationTimestamp: "2024-02-29T18:07:02Z"
	I0229 18:18:31.982716   30631 command_runner.go:130] >   name: coredns
	I0229 18:18:31.982722   30631 command_runner.go:130] >   namespace: kube-system
	I0229 18:18:31.982729   30631 command_runner.go:130] >   resourceVersion: "402"
	I0229 18:18:31.982749   30631 command_runner.go:130] >   uid: 3eea14cc-79f0-44e7-a941-620aa593b02d
	I0229 18:18:31.985829   30631 node_ready.go:35] waiting up to 6m0s for node "multinode-051105" to be "Ready" ...
	I0229 18:18:31.985926   30631 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
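The start.go:902 line above is the outcome of inspecting the Corefile dumped a few lines earlier: its hosts { ... } block already maps 192.168.39.1 to host.minikube.internal, so no patch is needed. minikube performs this check by running kubectl over SSH (the ssh_runner line before the Corefile dump); the same lookup can also be expressed directly with client-go. This is only an illustrative sketch with assumed names.

// Sketch: fetch the coredns ConfigMap and check whether the Corefile already
// carries the host.minikube.internal record. Names are assumptions.
package corednscheck

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// The hosts block in the Corefile is where the record lives; if it is
	// already present, the caller can skip rewriting the ConfigMap.
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}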
	I0229 18:18:32.033242   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:32.033266   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:32.033278   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:32.033286   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:32.035992   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:32.036015   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:32.036025   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:32.036032   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:32.036037   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:32.036041   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:32.036045   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:32 GMT
	I0229 18:18:32.036049   30631 round_trippers.go:580]     Audit-Id: 1bd2958a-db09-43ba-8e81-a5662b16f1dd
	I0229 18:18:32.036387   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:32.487054   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:32.487077   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:32.487100   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:32.487107   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:32.489913   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:32.489936   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:32.489952   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:32.489957   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:32.489962   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:32.489969   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:32.489972   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:32 GMT
	I0229 18:18:32.489976   30631 round_trippers.go:580]     Audit-Id: abb49991-1328-4982-8275-6e52478b91cc
	I0229 18:18:32.490187   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:32.986853   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:32.986883   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:32.986890   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:32.986894   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:32.991652   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:32.991671   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:32.991677   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:32.991682   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:32.991685   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:32.991688   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:32 GMT
	I0229 18:18:32.991691   30631 round_trippers.go:580]     Audit-Id: 24f7bc85-515f-4090-ba60-4260f5fce7b6
	I0229 18:18:32.991695   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:32.992102   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"804","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 18:18:33.486808   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:33.486829   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:33.486838   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:33.486847   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:33.489953   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:33.489978   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:33.489988   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:33.489994   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:33.490001   30631 round_trippers.go:580]     Audit-Id: e8138f6b-cdb3-41e2-a876-be7c006e5174
	I0229 18:18:33.490007   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:33.490013   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:33.490017   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:33.490308   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:33.490686   30631 node_ready.go:49] node "multinode-051105" has status "Ready":"True"
	I0229 18:18:33.490707   30631 node_ready.go:38] duration metric: took 1.504855573s waiting for node "multinode-051105" to be "Ready" ...
	I0229 18:18:33.490715   30631 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
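The pod_ready.go:35 line starts the polling loop that dominates the rest of this log: for each control-plane label it lists the matching kube-system pods roughly every half second, checks the PodReady condition (and, as the earlier pod_ready.go:97 entries show, also whether the hosting node is Ready), and gives up after the stated timeout. A simplified client-go sketch of that loop, with assumed names and without the per-node check, is:

// Sketch of the extra wait above: poll the kube-system pods matching a label
// selector until each reports Ready or the timeout expires. Names and the
// poll interval are assumptions.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitSystemPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}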
	I0229 18:18:33.490768   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:33.490776   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:33.490784   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:33.490789   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:33.502727   30631 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0229 18:18:33.502742   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:33.502748   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:33.502752   30631 round_trippers.go:580]     Audit-Id: a5239646-b332-4e96-8dcc-5a1a09d41107
	I0229 18:18:33.502755   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:33.502758   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:33.502761   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:33.502763   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:33.505248   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"923"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82555 chars]
	I0229 18:18:33.507770   30631 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:33.507847   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:33.507855   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:33.507867   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:33.507872   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:33.509787   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:33.509804   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:33.509813   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:33.509820   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:33.509825   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:33.509829   30631 round_trippers.go:580]     Audit-Id: be8e4fa0-b9b5-48c4-9538-a32589fdc88c
	I0229 18:18:33.509834   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:33.509842   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:33.510116   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:33.510512   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:33.510526   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:33.510533   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:33.510538   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:33.512469   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:33.512489   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:33.512498   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:33.512502   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:33.512513   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:33.512521   30631 round_trippers.go:580]     Audit-Id: 6540f98d-4838-451c-90ce-ff438670f4e8
	I0229 18:18:33.512523   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:33.512525   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:33.512691   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:34.008032   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:34.008054   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:34.008062   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:34.008065   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:34.010803   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:34.010825   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:34.010833   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:34.010839   30631 round_trippers.go:580]     Audit-Id: dd3c6918-aa5d-409d-a34d-25d359c6549e
	I0229 18:18:34.010843   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:34.010847   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:34.010851   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:34.010856   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:34.011419   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:34.011854   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:34.011870   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:34.011891   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:34.011898   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:34.014041   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:34.014061   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:34.014070   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:34.014075   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:34.014080   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:33 GMT
	I0229 18:18:34.014084   30631 round_trippers.go:580]     Audit-Id: 542ec9f2-ad29-4464-b65f-be1ff556d9c6
	I0229 18:18:34.014089   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:34.014093   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:34.014376   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:34.508864   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:34.508883   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:34.508891   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:34.508896   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:34.511741   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:34.511763   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:34.511772   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:34.511779   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:34 GMT
	I0229 18:18:34.511784   30631 round_trippers.go:580]     Audit-Id: 92593c77-9bbb-4023-ab23-100bc4f6ef78
	I0229 18:18:34.511788   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:34.511793   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:34.511798   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:34.512159   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:34.512589   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:34.512601   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:34.512609   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:34.512612   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:34.514509   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:34.514527   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:34.514535   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:34 GMT
	I0229 18:18:34.514537   30631 round_trippers.go:580]     Audit-Id: 24cc78d2-a4cf-4dcd-bc3e-d79c23741ff0
	I0229 18:18:34.514540   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:34.514542   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:34.514545   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:34.514549   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:34.514891   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:35.008612   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:35.008636   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:35.008644   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:35.008648   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:35.011998   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:35.012021   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:35.012031   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:35.012039   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:35.012043   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:34 GMT
	I0229 18:18:35.012046   30631 round_trippers.go:580]     Audit-Id: ee941d53-0e5f-4735-b19c-b9e293f2a25b
	I0229 18:18:35.012048   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:35.012051   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:35.012517   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:35.013013   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:35.013030   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:35.013036   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:35.013040   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:35.015281   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:35.015302   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:35.015313   30631 round_trippers.go:580]     Audit-Id: 92871995-0e6c-4515-a0b9-c91c5b3ff4f9
	I0229 18:18:35.015318   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:35.015324   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:35.015328   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:35.015334   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:35.015339   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:34 GMT
	I0229 18:18:35.015653   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:35.508271   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:35.508296   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:35.508303   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:35.508307   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:35.511012   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:35.511050   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:35.511060   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:35.511063   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:35.511067   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:35 GMT
	I0229 18:18:35.511072   30631 round_trippers.go:580]     Audit-Id: 0e9e65a1-5078-4ca4-8ecc-bb7f739d6c15
	I0229 18:18:35.511074   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:35.511076   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:35.511482   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:35.511888   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:35.511901   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:35.511907   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:35.511910   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:35.514137   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:35.514152   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:35.514157   30631 round_trippers.go:580]     Audit-Id: 5df3e150-844d-4941-972a-68739fc272cf
	I0229 18:18:35.514161   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:35.514163   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:35.514165   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:35.514168   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:35.514170   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:35 GMT
	I0229 18:18:35.514515   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:35.514772   30631 pod_ready.go:102] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"False"
	I0229 18:18:36.008110   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:36.008129   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:36.008137   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:36.008142   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:36.011632   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:36.011653   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:36.011662   30631 round_trippers.go:580]     Audit-Id: 975ddeea-6a7b-44cd-81e4-1abe7f0ae818
	I0229 18:18:36.011667   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:36.011673   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:36.011677   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:36.011680   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:36.011683   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:35 GMT
	I0229 18:18:36.011838   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:36.012340   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:36.012354   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:36.012361   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:36.012365   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:36.014451   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:36.014473   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:36.014484   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:36.014491   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:35 GMT
	I0229 18:18:36.014497   30631 round_trippers.go:580]     Audit-Id: b8783f8c-44dc-42c5-86ec-879b5dd7167a
	I0229 18:18:36.014502   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:36.014505   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:36.014509   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:36.014866   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:36.507959   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:36.507981   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:36.507989   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:36.507993   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:36.510850   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:36.510874   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:36.510884   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:36.510890   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:36 GMT
	I0229 18:18:36.510896   30631 round_trippers.go:580]     Audit-Id: cae95809-ba30-4b2f-b0e2-2ea27f4abbd0
	I0229 18:18:36.510901   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:36.510905   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:36.510909   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:36.511091   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:36.511629   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:36.511644   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:36.511651   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:36.511657   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:36.514290   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:36.514315   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:36.514345   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:36.514355   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:36.514365   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:36.514370   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:36.514375   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:36 GMT
	I0229 18:18:36.514379   30631 round_trippers.go:580]     Audit-Id: e525ff23-6635-46a2-a5b9-c77cf8fd55f8
	I0229 18:18:36.514571   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:37.008217   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:37.008246   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:37.008258   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:37.008264   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:37.011664   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:37.011683   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:37.011691   30631 round_trippers.go:580]     Audit-Id: 0559b373-cf65-45fa-951a-b9e93d4baafe
	I0229 18:18:37.011713   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:37.011720   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:37.011724   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:37.011728   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:37.011732   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:36 GMT
	I0229 18:18:37.012006   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:37.012417   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:37.012429   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:37.012436   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:37.012439   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:37.015150   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:37.015169   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:37.015178   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:37.015184   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:37.015190   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:36 GMT
	I0229 18:18:37.015193   30631 round_trippers.go:580]     Audit-Id: e4f6f335-6a05-4c62-8f33-fa774a52e339
	I0229 18:18:37.015196   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:37.015200   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:37.015415   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:37.508064   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:37.508093   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:37.508103   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:37.508108   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:37.511457   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:37.511475   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:37.511484   30631 round_trippers.go:580]     Audit-Id: e27561bc-ac38-468b-b1f8-3b6478104ca6
	I0229 18:18:37.511490   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:37.511495   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:37.511501   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:37.511507   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:37.511511   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:37 GMT
	I0229 18:18:37.511898   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:37.512292   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:37.512307   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:37.512314   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:37.512317   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:37.515089   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:37.515100   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:37.515108   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:37 GMT
	I0229 18:18:37.515113   30631 round_trippers.go:580]     Audit-Id: 021f44c8-c244-45d2-8131-54040b81f17a
	I0229 18:18:37.515120   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:37.515125   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:37.515133   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:37.515138   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:37.515526   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:37.515810   30631 pod_ready.go:102] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"False"
	I0229 18:18:38.008161   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:38.008181   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:38.008188   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:38.008194   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:38.011097   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:38.011117   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:38.011125   30631 round_trippers.go:580]     Audit-Id: 71f2f0b4-5ea6-473b-8f69-93414460db44
	I0229 18:18:38.011132   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:38.011136   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:38.011140   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:38.011145   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:38.011150   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:37 GMT
	I0229 18:18:38.011762   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:38.012175   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:38.012190   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:38.012200   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:38.012205   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:38.014286   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:38.014306   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:38.014315   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:38.014321   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:38.014327   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:38.014331   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:37 GMT
	I0229 18:18:38.014335   30631 round_trippers.go:580]     Audit-Id: 54876ab5-7514-4bbe-af90-9073579facc4
	I0229 18:18:38.014340   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:38.014627   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:38.508598   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:38.508625   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:38.508636   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:38.508643   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:38.510874   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:38.510902   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:38.510910   30631 round_trippers.go:580]     Audit-Id: b7060292-7e84-476e-9ec1-2beefe023948
	I0229 18:18:38.510931   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:38.510938   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:38.510944   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:38.510949   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:38.510954   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:38 GMT
	I0229 18:18:38.511104   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:38.511547   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:38.511565   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:38.511574   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:38.511580   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:38.513485   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:38.513499   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:38.513505   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:38.513508   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:38 GMT
	I0229 18:18:38.513512   30631 round_trippers.go:580]     Audit-Id: 6232a663-cf43-4d2b-90fe-17f419f82bce
	I0229 18:18:38.513515   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:38.513518   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:38.513520   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:38.513714   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:39.008374   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:39.008400   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:39.008417   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:39.008421   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:39.011416   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:39.011441   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:39.011450   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:38 GMT
	I0229 18:18:39.011454   30631 round_trippers.go:580]     Audit-Id: 929513c6-e2d0-4ced-84a8-ff328abf2efe
	I0229 18:18:39.011458   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:39.011462   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:39.011465   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:39.011471   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:39.011676   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:39.012086   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:39.012098   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:39.012105   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:39.012108   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:39.014392   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:39.014407   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:39.014418   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:38 GMT
	I0229 18:18:39.014421   30631 round_trippers.go:580]     Audit-Id: 11a51694-eb94-48a9-b9c6-1c5a553dae6c
	I0229 18:18:39.014425   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:39.014429   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:39.014432   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:39.014436   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:39.014700   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:39.508313   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:39.508335   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:39.508353   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:39.508357   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:39.511377   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:39.511404   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:39.511412   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:39.511417   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:39.511421   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:39 GMT
	I0229 18:18:39.511423   30631 round_trippers.go:580]     Audit-Id: 2c889872-e6a3-453a-80f8-6c648b7ac91f
	I0229 18:18:39.511426   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:39.511429   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:39.511636   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:39.512060   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:39.512074   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:39.512081   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:39.512085   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:39.513926   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:39.513938   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:39.513948   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:39 GMT
	I0229 18:18:39.513952   30631 round_trippers.go:580]     Audit-Id: 2f39f343-92fa-454a-af82-7a50186788fb
	I0229 18:18:39.513954   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:39.513960   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:39.513964   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:39.513968   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:39.514141   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:40.008938   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:40.008969   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:40.008980   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:40.008988   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:40.013923   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:40.013946   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:40.013958   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:40.013964   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:40.013968   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:40.013973   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:40.013991   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:39 GMT
	I0229 18:18:40.013995   30631 round_trippers.go:580]     Audit-Id: a1e05feb-d961-4a9e-b382-419735cebe4b
	I0229 18:18:40.014882   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:40.015434   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:40.015460   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:40.015471   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:40.015481   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:40.017480   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:40.017498   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:40.017506   30631 round_trippers.go:580]     Audit-Id: dd3ec3d5-ee47-4eef-a106-87f1127f1e20
	I0229 18:18:40.017510   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:40.017516   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:40.017521   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:40.017525   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:40.017530   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:39 GMT
	I0229 18:18:40.017712   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:40.018106   30631 pod_ready.go:102] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"False"
	I0229 18:18:40.508207   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:40.508226   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:40.508234   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:40.508239   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:40.510880   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:40.510899   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:40.510908   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:40.510914   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:40 GMT
	I0229 18:18:40.510919   30631 round_trippers.go:580]     Audit-Id: b14b3941-8de4-463e-b9d1-29494191e801
	I0229 18:18:40.510923   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:40.510927   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:40.510930   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:40.511120   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:40.511743   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:40.511769   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:40.511780   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:40.511792   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:40.514010   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:40.514024   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:40.514031   30631 round_trippers.go:580]     Audit-Id: 548e3a45-730c-42d5-b69c-bfb16a77d332
	I0229 18:18:40.514037   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:40.514040   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:40.514044   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:40.514050   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:40.514056   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:40 GMT
	I0229 18:18:40.514228   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:41.008976   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:41.009022   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:41.009032   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:41.009038   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:41.012897   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:41.012923   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:41.012932   30631 round_trippers.go:580]     Audit-Id: 256270e1-5cb0-43db-bfcf-2f8eb33d81e4
	I0229 18:18:41.012936   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:41.012939   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:41.012948   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:41.012952   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:41.012955   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:40 GMT
	I0229 18:18:41.013421   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:41.013967   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:41.013983   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:41.013994   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:41.014000   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:41.016376   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:41.016399   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:41.016407   30631 round_trippers.go:580]     Audit-Id: e32c6045-0da4-4c00-8784-9df4d05d59ae
	I0229 18:18:41.016412   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:41.016416   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:41.016421   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:41.016431   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:41.016436   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:40 GMT
	I0229 18:18:41.016719   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:41.507943   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:41.507968   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:41.507992   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:41.507996   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:41.511395   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:41.511417   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:41.511427   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:41 GMT
	I0229 18:18:41.511432   30631 round_trippers.go:580]     Audit-Id: a1f5c7b9-647a-4934-b159-619124f191b9
	I0229 18:18:41.511438   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:41.511442   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:41.511445   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:41.511449   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:41.512147   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:41.512567   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:41.512580   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:41.512587   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:41.512590   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:41.514752   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:41.514781   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:41.514786   30631 round_trippers.go:580]     Audit-Id: d6fe3fa1-3245-45ad-b16e-4d124ad1d825
	I0229 18:18:41.514789   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:41.514791   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:41.514794   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:41.514796   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:41.514804   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:41 GMT
	I0229 18:18:41.514999   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:42.008325   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:42.008352   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:42.008362   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:42.008367   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:42.011438   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:42.011464   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:42.011471   30631 round_trippers.go:580]     Audit-Id: a84d1513-7041-4ad1-a5f6-a1800f4869a1
	I0229 18:18:42.011475   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:42.011479   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:42.011483   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:42.011487   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:42.011490   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:41 GMT
	I0229 18:18:42.011677   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:42.012196   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:42.012213   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:42.012224   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:42.012229   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:42.014454   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:42.014471   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:42.014480   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:42.014487   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:41 GMT
	I0229 18:18:42.014490   30631 round_trippers.go:580]     Audit-Id: 8dc58d0f-33af-4c37-9e1e-c0ce9700366b
	I0229 18:18:42.014493   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:42.014497   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:42.014499   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:42.014636   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:42.508196   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:42.508218   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:42.508225   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:42.508240   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:42.510836   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:42.510858   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:42.510868   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:42.510873   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:42.510876   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:42.510879   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:42.510883   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:42 GMT
	I0229 18:18:42.510885   30631 round_trippers.go:580]     Audit-Id: cf4509ac-0b03-494d-9f00-502bf3ad3175
	I0229 18:18:42.511606   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:42.512147   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:42.512166   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:42.512176   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:42.512182   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:42.514504   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:42.514524   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:42.514532   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:42.514538   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:42.514542   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:42 GMT
	I0229 18:18:42.514546   30631 round_trippers.go:580]     Audit-Id: 1103d429-893e-4a97-96fc-7a0606ce7b5e
	I0229 18:18:42.514551   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:42.514556   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:42.514719   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:42.515102   30631 pod_ready.go:102] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"False"
	I0229 18:18:43.008362   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:43.008396   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:43.008408   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:43.008421   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:43.011170   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:43.011191   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:43.011198   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:43.011202   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:43.011204   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:43.011207   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:43.011216   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:42 GMT
	I0229 18:18:43.011218   30631 round_trippers.go:580]     Audit-Id: a2dafa3f-861c-45f9-825a-4bd40f522ee2
	I0229 18:18:43.011643   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:43.012049   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:43.012064   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:43.012070   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:43.012074   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:43.014314   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:43.014333   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:43.014342   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:43.014352   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:43.014358   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:43.014362   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:43.014367   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:42 GMT
	I0229 18:18:43.014371   30631 round_trippers.go:580]     Audit-Id: ac0c997f-a62a-49e7-9040-b5ef2693acd3
	I0229 18:18:43.014707   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:43.508234   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:43.508256   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:43.508264   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:43.508267   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:43.510918   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:43.510935   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:43.510942   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:43 GMT
	I0229 18:18:43.510946   30631 round_trippers.go:580]     Audit-Id: b0dab60e-9dac-4afb-bca4-be03a7e6a470
	I0229 18:18:43.510950   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:43.510954   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:43.510960   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:43.510964   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:43.511220   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"807","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 18:18:43.511669   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:43.511685   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:43.511695   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:43.511700   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:43.529869   30631 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 18:18:43.529896   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:43.529907   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:43.529913   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:43 GMT
	I0229 18:18:43.529917   30631 round_trippers.go:580]     Audit-Id: 085abc6f-677e-4caa-a149-702e444deeff
	I0229 18:18:43.529923   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:43.529944   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:43.529952   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:43.530108   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:44.008734   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:44.008762   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:44.008775   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:44.008779   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:44.011767   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:44.011787   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:44.011794   30631 round_trippers.go:580]     Audit-Id: 8da4e99e-eba2-4053-9f0f-5be646bc6a6a
	I0229 18:18:44.011799   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:44.011817   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:44.011819   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:44.011822   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:44.011825   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:43 GMT
	I0229 18:18:44.012137   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"954","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6455 chars]
	I0229 18:18:44.012555   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:44.012567   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:44.012574   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:44.012577   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:44.015109   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:44.015127   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:44.015144   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:44.015150   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:44.015158   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:43 GMT
	I0229 18:18:44.015166   30631 round_trippers.go:580]     Audit-Id: 03875acd-af0f-4928-b70c-2da7d9fe361f
	I0229 18:18:44.015170   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:44.015173   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:44.015470   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:44.508057   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:44.508077   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:44.508084   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:44.508088   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:44.510658   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:44.510683   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:44.510691   30631 round_trippers.go:580]     Audit-Id: 01fa8167-ed86-4ff7-bebd-5d66b7120331
	I0229 18:18:44.510695   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:44.510698   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:44.510701   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:44.510705   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:44.510707   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:44.510940   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"954","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6455 chars]
	I0229 18:18:44.511385   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:44.511398   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:44.511403   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:44.511406   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:44.513812   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:44.513826   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:44.513830   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:44.513841   30631 round_trippers.go:580]     Audit-Id: d29c24c7-518b-4d13-849c-a09c692574bb
	I0229 18:18:44.513847   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:44.513851   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:44.513855   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:44.513859   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:44.514176   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.008975   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:18:45.009007   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.009018   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.009025   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.012245   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:45.012267   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.012274   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.012278   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.012282   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.012286   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:45.012291   30631 round_trippers.go:580]     Audit-Id: 502e877a-0ad2-4072-8526-c578a06495f4
	I0229 18:18:45.012295   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.012599   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 18:18:45.013010   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.013024   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.013030   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.013040   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.015141   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.015159   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.015168   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.015176   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:45.015181   30631 round_trippers.go:580]     Audit-Id: 090a08b6-0260-4cef-8324-61f202352ede
	I0229 18:18:45.015185   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.015189   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.015193   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.015425   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.015811   30631 pod_ready.go:92] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.015829   30631 pod_ready.go:81] duration metric: took 11.508039022s waiting for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.015840   30631 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.015926   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-051105
	I0229 18:18:45.015936   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.015943   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.015950   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.018062   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.018083   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.018092   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.018097   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.018103   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.018112   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.018118   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:45.018124   30631 round_trippers.go:580]     Audit-Id: 028c928b-bba9-4880-9065-d3ddadd37246
	I0229 18:18:45.018286   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-051105","namespace":"kube-system","uid":"e73d8125-9770-4ddf-a382-a19adc1ed94f","resourceVersion":"948","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.200:2379","kubernetes.io/config.hash":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.mirror":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.seen":"2024-02-29T18:06:55.285569285Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 18:18:45.018678   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.018692   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.018702   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.018709   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.020617   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:18:45.020632   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.020641   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.020646   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:45.020650   30631 round_trippers.go:580]     Audit-Id: 550d5c62-a619-44fc-ad46-8df749ab721c
	I0229 18:18:45.020656   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.020672   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.020676   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.020955   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.021233   30631 pod_ready.go:92] pod "etcd-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.021250   30631 pod_ready.go:81] duration metric: took 5.397858ms waiting for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.021278   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.021351   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-051105
	I0229 18:18:45.021360   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.021369   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.021377   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.024435   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:45.024449   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.024455   30631 round_trippers.go:580]     Audit-Id: a8e70bdf-ba88-4be2-978f-b27e28333ba3
	I0229 18:18:45.024458   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.024464   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.024468   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.024472   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.024477   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:44 GMT
	I0229 18:18:45.025112   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-051105","namespace":"kube-system","uid":"722abb81-d303-4fa9-bcbb-8c16aaf4421d","resourceVersion":"925","creationTimestamp":"2024-02-29T18:07:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.200:8443","kubernetes.io/config.hash":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.mirror":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.seen":"2024-02-29T18:07:02.423715355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 18:18:45.025600   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.025617   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.025626   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.025636   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.027818   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.027840   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.027849   30631 round_trippers.go:580]     Audit-Id: 2f0de20a-7488-43fc-a9ed-716c7505440c
	I0229 18:18:45.027853   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.027855   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.027859   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.027863   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.027866   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.028024   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.028332   30631 pod_ready.go:92] pod "kube-apiserver-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.028348   30631 pod_ready.go:81] duration metric: took 7.058972ms waiting for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.028358   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.028416   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-051105
	I0229 18:18:45.028431   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.028441   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.028449   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.031174   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.031186   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.031200   30631 round_trippers.go:580]     Audit-Id: 78a91004-7dfc-47de-b5a9-6115682f31a9
	I0229 18:18:45.031207   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.031210   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.031213   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.031215   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.031219   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.031792   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-051105","namespace":"kube-system","uid":"a3156cba-a585-47c6-8b26-2069af0021ce","resourceVersion":"929","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.mirror":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.seen":"2024-02-29T18:06:55.285572192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 18:18:45.032286   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.032301   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.032311   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.032319   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.034672   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.034685   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.034690   30631 round_trippers.go:580]     Audit-Id: d30f0f97-8218-4954-ae1f-a06f62ddfcf5
	I0229 18:18:45.034695   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.034703   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.034712   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.034720   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.034727   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.035122   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.035470   30631 pod_ready.go:92] pod "kube-controller-manager-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.035495   30631 pod_ready.go:81] duration metric: took 7.117292ms waiting for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.035512   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.035563   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:18:45.035572   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.035580   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.035590   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.037739   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.037754   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.037763   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.037769   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.037773   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.037776   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.037778   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.037781   30631 round_trippers.go:580]     Audit-Id: 670ae973-a952-4b97-a2fd-9899215984eb
	I0229 18:18:45.037983   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"574","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5492 chars]
	I0229 18:18:45.038444   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:18:45.038460   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.038468   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.038471   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.040683   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.040703   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.040712   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.040717   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.040724   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.040730   30631 round_trippers.go:580]     Audit-Id: 18847bcd-658f-4e1f-9664-84ba8af6cc69
	I0229 18:18:45.040736   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.040740   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.040878   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"d9c0ff3f-8bc0-4054-a484-27b1793b2e4e","resourceVersion":"818","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_10_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0229 18:18:45.041185   30631 pod_ready.go:92] pod "kube-proxy-cbl8s" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.041202   30631 pod_ready.go:81] duration metric: took 5.677867ms waiting for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.041214   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.209613   30631 request.go:629] Waited for 168.346327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:18:45.209702   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:18:45.209711   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.209718   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.209722   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.213456   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:45.213478   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.213488   30631 round_trippers.go:580]     Audit-Id: 52d34ec4-ba78-49a0-bba2-1eaedccebd05
	I0229 18:18:45.213496   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.213503   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.213507   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.213512   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.213519   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.213817   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfw9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f","resourceVersion":"780","creationTimestamp":"2024-02-29T18:09:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5488 chars]
	I0229 18:18:45.409590   30631 request.go:629] Waited for 195.35564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:18:45.409678   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:18:45.409690   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.409700   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.409705   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.412484   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.412505   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.412515   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.412522   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.412530   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.412534   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.412538   30631 round_trippers.go:580]     Audit-Id: 073f1a8e-d720-4dbc-bfa9-0662fa85aa4f
	I0229 18:18:45.412542   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.412662   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"2aa133ce-8b37-4464-acdc-adffba00e813","resourceVersion":"936","creationTimestamp":"2024-02-29T18:10:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_10_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:10:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I0229 18:18:45.413061   30631 pod_ready.go:92] pod "kube-proxy-jfw9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.413084   30631 pod_ready.go:81] duration metric: took 371.856239ms waiting for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.413094   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.608988   30631 request.go:629] Waited for 195.83946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:18:45.609070   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:18:45.609076   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.609083   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.609086   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.612376   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:45.612393   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.612400   30631 round_trippers.go:580]     Audit-Id: cb754d97-2207-4010-b07b-f56d5c0592fa
	I0229 18:18:45.612403   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.612407   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.612410   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.612414   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.612416   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.613224   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wvhlx","generateName":"kube-proxy-","namespace":"kube-system","uid":"5548dfdd-2cda-48bc-9359-95eda53437d4","resourceVersion":"814","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 18:18:45.810012   30631 request.go:629] Waited for 196.382117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.810075   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:45.810083   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:45.810094   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:45.810103   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:45.813029   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:45.813052   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:45.813062   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:45.813069   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:45.813073   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:45.813078   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:45.813100   30631 round_trippers.go:580]     Audit-Id: 8f2fb089-822b-4554-a2be-71a2eaf577c8
	I0229 18:18:45.813108   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:45.813363   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:45.813786   30631 pod_ready.go:92] pod "kube-proxy-wvhlx" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:45.813806   30631 pod_ready.go:81] duration metric: took 400.707058ms waiting for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:45.813817   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:46.009787   30631 request.go:629] Waited for 195.911586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:18:46.009883   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:18:46.009894   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.009909   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.009921   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.013056   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:18:46.013080   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.013090   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.013097   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.013102   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.013105   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:45 GMT
	I0229 18:18:46.013109   30631 round_trippers.go:580]     Audit-Id: 9f64123b-2bb8-44a9-b329-e13df3aa657a
	I0229 18:18:46.013113   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.013250   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-051105","namespace":"kube-system","uid":"de579522-4a2a-4a66-86f0-8fd37603bb85","resourceVersion":"949","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.mirror":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.seen":"2024-02-29T18:06:55.285517129Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 18:18:46.209032   30631 request.go:629] Waited for 195.294707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:46.209118   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:18:46.209129   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.209141   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.209157   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.211959   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:46.211986   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.211996   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:46.212001   30631 round_trippers.go:580]     Audit-Id: 97fb3c52-3de7-4566-9973-66e490d25314
	I0229 18:18:46.212006   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.212010   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.212014   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.212017   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.212176   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 18:18:46.212601   30631 pod_ready.go:92] pod "kube-scheduler-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:18:46.212622   30631 pod_ready.go:81] duration metric: took 398.795116ms waiting for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:18:46.212645   30631 pod_ready.go:38] duration metric: took 12.721920602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:18:46.212665   30631 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:18:46.212728   30631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:18:46.244496   30631 command_runner.go:130] > 1086
	I0229 18:18:46.244537   30631 api_server.go:72] duration metric: took 14.379185925s to wait for apiserver process to appear ...
	I0229 18:18:46.244548   30631 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:18:46.244568   30631 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:18:46.257190   30631 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0229 18:18:46.257255   30631 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0229 18:18:46.257260   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.257267   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.257274   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.258274   30631 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:18:46.258290   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.258298   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.258304   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.258310   30631 round_trippers.go:580]     Content-Length: 264
	I0229 18:18:46.258314   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:46.258320   30631 round_trippers.go:580]     Audit-Id: 909853c7-6148-4bab-b594-46604a84a7cc
	I0229 18:18:46.258324   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.258330   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.258380   30631 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 18:18:46.258429   30631 api_server.go:141] control plane version: v1.28.4
	I0229 18:18:46.258445   30631 api_server.go:131] duration metric: took 13.891311ms to wait for apiserver health ...
	I0229 18:18:46.258452   30631 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:18:46.409849   30631 request.go:629] Waited for 151.299384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:46.409907   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:46.409912   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.409919   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.409925   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.414134   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:46.414154   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.414162   30631 round_trippers.go:580]     Audit-Id: ad5d1dfd-09ed-45e5-a6ad-f1574a7fc78f
	I0229 18:18:46.414166   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.414170   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.414175   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.414180   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.414184   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:46.417022   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"965"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81437 chars]
	I0229 18:18:46.419837   30631 system_pods.go:59] 12 kube-system pods found
	I0229 18:18:46.419865   30631 system_pods.go:61] "coredns-5dd5756b68-bwhnb" [a3853502-49ad-4d24-8c63-3000e4f4aa8e] Running
	I0229 18:18:46.419872   30631 system_pods.go:61] "etcd-multinode-051105" [e73d8125-9770-4ddf-a382-a19adc1ed94f] Running
	I0229 18:18:46.419881   30631 system_pods.go:61] "kindnet-c2ztr" [c5679d05-61cd-4fc6-8fc0-93481b041891] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:46.419890   30631 system_pods.go:61] "kindnet-kvkf2" [207f0896-6db7-45e5-9278-bffc8efa19c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:46.419896   30631 system_pods.go:61] "kindnet-r2q5q" [4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9] Running
	I0229 18:18:46.419902   30631 system_pods.go:61] "kube-apiserver-multinode-051105" [722abb81-d303-4fa9-bcbb-8c16aaf4421d] Running
	I0229 18:18:46.419912   30631 system_pods.go:61] "kube-controller-manager-multinode-051105" [a3156cba-a585-47c6-8b26-2069af0021ce] Running
	I0229 18:18:46.419928   30631 system_pods.go:61] "kube-proxy-cbl8s" [352ba5ff-0a79-4766-8a3f-a0860aad1b91] Running
	I0229 18:18:46.419939   30631 system_pods.go:61] "kube-proxy-jfw9f" [45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f] Running
	I0229 18:18:46.419944   30631 system_pods.go:61] "kube-proxy-wvhlx" [5548dfdd-2cda-48bc-9359-95eda53437d4] Running
	I0229 18:18:46.419949   30631 system_pods.go:61] "kube-scheduler-multinode-051105" [de579522-4a2a-4a66-86f0-8fd37603bb85] Running
	I0229 18:18:46.419957   30631 system_pods.go:61] "storage-provisioner" [40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4] Running
	I0229 18:18:46.419968   30631 system_pods.go:74] duration metric: took 161.510069ms to wait for pod list to return data ...
	I0229 18:18:46.419981   30631 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:18:46.609400   30631 request.go:629] Waited for 189.336404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0229 18:18:46.609469   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0229 18:18:46.609474   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.609481   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.609487   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.612413   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:18:46.612430   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.612437   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:46.612440   30631 round_trippers.go:580]     Audit-Id: a6dd0503-1524-4398-bb7b-4f3daa196741
	I0229 18:18:46.612443   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.612446   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.612449   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.612453   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.612455   30631 round_trippers.go:580]     Content-Length: 261
	I0229 18:18:46.612606   30631 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"968"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c76d00ad-3203-465b-847f-6c1c6718b225","resourceVersion":"369","creationTimestamp":"2024-02-29T18:07:14Z"}}]}
	I0229 18:18:46.612811   30631 default_sa.go:45] found service account: "default"
	I0229 18:18:46.612835   30631 default_sa.go:55] duration metric: took 192.845216ms for default service account to be created ...
	I0229 18:18:46.612844   30631 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:18:46.809213   30631 request.go:629] Waited for 196.312269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:46.809275   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:18:46.809291   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:46.809299   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:46.809306   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:46.813512   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:46.813542   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:46.813552   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:46.813562   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:46.813567   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:46.813573   30631 round_trippers.go:580]     Audit-Id: 3f2ee795-3ceb-477b-a807-c85db05857cf
	I0229 18:18:46.813576   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:46.813581   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:46.815199   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"968"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81437 chars]
	I0229 18:18:46.817490   30631 system_pods.go:86] 12 kube-system pods found
	I0229 18:18:46.817509   30631 system_pods.go:89] "coredns-5dd5756b68-bwhnb" [a3853502-49ad-4d24-8c63-3000e4f4aa8e] Running
	I0229 18:18:46.817514   30631 system_pods.go:89] "etcd-multinode-051105" [e73d8125-9770-4ddf-a382-a19adc1ed94f] Running
	I0229 18:18:46.817521   30631 system_pods.go:89] "kindnet-c2ztr" [c5679d05-61cd-4fc6-8fc0-93481b041891] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:46.817526   30631 system_pods.go:89] "kindnet-kvkf2" [207f0896-6db7-45e5-9278-bffc8efa19c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 18:18:46.817533   30631 system_pods.go:89] "kindnet-r2q5q" [4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9] Running
	I0229 18:18:46.817537   30631 system_pods.go:89] "kube-apiserver-multinode-051105" [722abb81-d303-4fa9-bcbb-8c16aaf4421d] Running
	I0229 18:18:46.817541   30631 system_pods.go:89] "kube-controller-manager-multinode-051105" [a3156cba-a585-47c6-8b26-2069af0021ce] Running
	I0229 18:18:46.817545   30631 system_pods.go:89] "kube-proxy-cbl8s" [352ba5ff-0a79-4766-8a3f-a0860aad1b91] Running
	I0229 18:18:46.817549   30631 system_pods.go:89] "kube-proxy-jfw9f" [45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f] Running
	I0229 18:18:46.817552   30631 system_pods.go:89] "kube-proxy-wvhlx" [5548dfdd-2cda-48bc-9359-95eda53437d4] Running
	I0229 18:18:46.817556   30631 system_pods.go:89] "kube-scheduler-multinode-051105" [de579522-4a2a-4a66-86f0-8fd37603bb85] Running
	I0229 18:18:46.817559   30631 system_pods.go:89] "storage-provisioner" [40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4] Running
	I0229 18:18:46.817565   30631 system_pods.go:126] duration metric: took 204.713317ms to wait for k8s-apps to be running ...
	I0229 18:18:46.817574   30631 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:18:46.817621   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:18:46.832480   30631 system_svc.go:56] duration metric: took 14.897035ms WaitForService to wait for kubelet.
	I0229 18:18:46.832513   30631 kubeadm.go:581] duration metric: took 14.967161467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:18:46.832537   30631 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:18:47.009878   30631 request.go:629] Waited for 177.271451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0229 18:18:47.009952   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0229 18:18:47.009962   30631 round_trippers.go:469] Request Headers:
	I0229 18:18:47.009970   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:18:47.009974   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:18:47.014222   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:18:47.014242   30631 round_trippers.go:577] Response Headers:
	I0229 18:18:47.014248   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:18:47.014251   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:18:47.014257   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:18:47.014260   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:18:47.014263   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:18:46 GMT
	I0229 18:18:47.014265   30631 round_trippers.go:580]     Audit-Id: bc4e32ed-2bde-4543-aa58-053dc3057301
	I0229 18:18:47.014829   30631 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"968"},"items":[{"metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"922","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0229 18:18:47.015458   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:47.015477   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:47.015489   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:47.015495   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:47.015499   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:18:47.015504   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:18:47.015512   30631 node_conditions.go:105] duration metric: took 182.968063ms to run NodePressure ...
	I0229 18:18:47.015531   30631 start.go:228] waiting for startup goroutines ...
	I0229 18:18:47.015548   30631 start.go:233] waiting for cluster config update ...
	I0229 18:18:47.015559   30631 start.go:242] writing updated cluster config ...
	I0229 18:18:47.015983   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:18:47.016094   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:18:47.019791   30631 out.go:177] * Starting worker node multinode-051105-m02 in cluster multinode-051105
	I0229 18:18:47.021088   30631 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:18:47.021109   30631 cache.go:56] Caching tarball of preloaded images
	I0229 18:18:47.021199   30631 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:18:47.021211   30631 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:18:47.021315   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:18:47.021490   30631 start.go:365] acquiring machines lock for multinode-051105-m02: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:18:47.021548   30631 start.go:369] acquired machines lock for "multinode-051105-m02" in 37.357µs
	I0229 18:18:47.021566   30631 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:18:47.021576   30631 fix.go:54] fixHost starting: m02
	I0229 18:18:47.021826   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:18:47.021873   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:18:47.036591   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0229 18:18:47.037015   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:18:47.037492   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:18:47.037518   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:18:47.037814   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:18:47.037995   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:18:47.038133   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetState
	I0229 18:18:47.039552   30631 fix.go:102] recreateIfNeeded on multinode-051105-m02: state=Running err=<nil>
	W0229 18:18:47.039568   30631 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:18:47.041322   30631 out.go:177] * Updating the running kvm2 "multinode-051105-m02" VM ...
	I0229 18:18:47.042461   30631 machine.go:88] provisioning docker machine ...
	I0229 18:18:47.042478   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:18:47.042665   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetMachineName
	I0229 18:18:47.042798   30631 buildroot.go:166] provisioning hostname "multinode-051105-m02"
	I0229 18:18:47.042810   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetMachineName
	I0229 18:18:47.042908   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:18:47.045322   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.045663   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.045684   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.045805   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:18:47.045979   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.046103   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.046222   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:18:47.046351   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:47.046554   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 18:18:47.046572   30631 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051105-m02 && echo "multinode-051105-m02" | sudo tee /etc/hostname
	I0229 18:18:47.179098   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051105-m02
	
	I0229 18:18:47.179134   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:18:47.181832   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.182196   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.182241   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.182392   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:18:47.182577   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.182760   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.182893   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:18:47.183082   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:47.183237   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 18:18:47.183254   30631 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-051105-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-051105-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-051105-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:18:47.296168   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:18:47.296195   30631 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:18:47.296208   30631 buildroot.go:174] setting up certificates
	I0229 18:18:47.296215   30631 provision.go:83] configureAuth start
	I0229 18:18:47.296224   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetMachineName
	I0229 18:18:47.296480   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetIP
	I0229 18:18:47.299226   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.299670   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.299690   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.299890   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:18:47.302383   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.302843   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.302871   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.302987   30631 provision.go:138] copyHostCerts
	I0229 18:18:47.303016   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:18:47.303061   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:18:47.303073   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:18:47.303151   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:18:47.303241   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:18:47.303265   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:18:47.303275   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:18:47.303317   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:18:47.303395   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:18:47.303422   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:18:47.303431   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:18:47.303465   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:18:47.303532   30631 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.multinode-051105-m02 san=[192.168.39.104 192.168.39.104 localhost 127.0.0.1 minikube multinode-051105-m02]
	I0229 18:18:47.615670   30631 provision.go:172] copyRemoteCerts
	I0229 18:18:47.615724   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:18:47.615745   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:18:47.618291   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.618644   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.618674   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.618863   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:18:47.619096   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.619268   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:18:47.619447   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m02/id_rsa Username:docker}
	I0229 18:18:47.706418   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 18:18:47.706493   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:18:47.733636   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 18:18:47.733683   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 18:18:47.761468   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 18:18:47.761528   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:18:47.787900   30631 provision.go:86] duration metric: configureAuth took 491.67586ms
	I0229 18:18:47.787923   30631 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:18:47.788135   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:18:47.788210   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:18:47.790975   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.791358   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:18:47.791385   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:18:47.791572   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:18:47.791766   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.791937   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:18:47.792074   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:18:47.792221   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:47.792384   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 18:18:47.792399   30631 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:20:18.324973   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:20:18.325007   30631 machine.go:91] provisioned docker machine in 1m31.282531581s
	I0229 18:20:18.325022   30631 start.go:300] post-start starting for "multinode-051105-m02" (driver="kvm2")
	I0229 18:20:18.325037   30631 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:20:18.325059   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:20:18.325414   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:20:18.325439   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:20:18.328630   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.329042   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:18.329068   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.329187   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:20:18.329363   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:20:18.329512   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:20:18.329621   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m02/id_rsa Username:docker}
	I0229 18:20:18.417194   30631 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:20:18.422749   30631 command_runner.go:130] > NAME=Buildroot
	I0229 18:20:18.422775   30631 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:20:18.422782   30631 command_runner.go:130] > ID=buildroot
	I0229 18:20:18.422788   30631 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:20:18.422795   30631 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:20:18.422946   30631 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:20:18.422993   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:20:18.423117   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:20:18.423215   30631 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:20:18.423229   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /etc/ssl/certs/136512.pem
	I0229 18:20:18.423321   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:20:18.433587   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:20:18.462533   30631 start.go:303] post-start completed in 137.494657ms
	I0229 18:20:18.462561   30631 fix.go:56] fixHost completed within 1m31.440984554s
	I0229 18:20:18.462584   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:20:18.465166   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.465548   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:18.465581   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.465741   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:20:18.465932   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:20:18.466168   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:20:18.466399   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:20:18.466624   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:20:18.466835   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0229 18:20:18.466852   30631 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:20:18.580290   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709230818.548925135
	
	I0229 18:20:18.580313   30631 fix.go:206] guest clock: 1709230818.548925135
	I0229 18:20:18.580323   30631 fix.go:219] Guest: 2024-02-29 18:20:18.548925135 +0000 UTC Remote: 2024-02-29 18:20:18.462565566 +0000 UTC m=+450.383819004 (delta=86.359569ms)
	I0229 18:20:18.580341   30631 fix.go:190] guest clock delta is within tolerance: 86.359569ms
	I0229 18:20:18.580346   30631 start.go:83] releasing machines lock for "multinode-051105-m02", held for 1m31.558788219s
	I0229 18:20:18.580365   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:20:18.580623   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetIP
	I0229 18:20:18.583355   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.583690   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:18.583713   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.585648   30631 out.go:177] * Found network options:
	I0229 18:20:18.587201   30631 out.go:177]   - NO_PROXY=192.168.39.200
	W0229 18:20:18.588713   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:20:18.588750   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:20:18.589377   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:20:18.589554   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:20:18.589673   30631 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:20:18.589713   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	W0229 18:20:18.589745   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:20:18.589821   30631 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:20:18.589843   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:20:18.592561   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.592762   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.593002   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:18.593034   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.593177   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:18.593197   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:18.593202   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:20:18.593380   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:20:18.593394   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:20:18.593558   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:20:18.593582   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:20:18.593764   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:20:18.593802   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m02/id_rsa Username:docker}
	I0229 18:20:18.593901   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m02/id_rsa Username:docker}
	I0229 18:20:18.835929   30631 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:20:18.835941   30631 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:20:18.842762   30631 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 18:20:18.842807   30631 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:20:18.842877   30631 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:20:18.853201   30631 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:20:18.853224   30631 start.go:475] detecting cgroup driver to use...
	I0229 18:20:18.853276   30631 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:20:18.870399   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:20:18.885204   30631 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:20:18.885258   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:20:18.901299   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:20:18.917633   30631 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:20:19.048051   30631 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:20:19.184122   30631 docker.go:233] disabling docker service ...
	I0229 18:20:19.184191   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:20:19.204120   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:20:19.219860   30631 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:20:19.355162   30631 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:20:19.494390   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:20:19.510681   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:20:19.532317   30631 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0229 18:20:19.532351   30631 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:20:19.532389   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:20:19.544137   30631 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:20:19.544203   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:20:19.557239   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:20:19.569889   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:20:19.581100   30631 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:20:19.593975   30631 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:20:19.604825   30631 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 18:20:19.604898   30631 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:20:19.616280   30631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:20:19.742827   30631 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:20:19.938879   30631 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:20:19.938973   30631 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:20:19.944652   30631 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 18:20:19.944673   30631 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:20:19.944683   30631 command_runner.go:130] > Device: 0,22	Inode: 1176        Links: 1
	I0229 18:20:19.944693   30631 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:20:19.944700   30631 command_runner.go:130] > Access: 2024-02-29 18:20:19.860246272 +0000
	I0229 18:20:19.944709   30631 command_runner.go:130] > Modify: 2024-02-29 18:20:19.860246272 +0000
	I0229 18:20:19.944716   30631 command_runner.go:130] > Change: 2024-02-29 18:20:19.860246272 +0000
	I0229 18:20:19.944722   30631 command_runner.go:130] >  Birth: -
	I0229 18:20:19.944768   30631 start.go:543] Will wait 60s for crictl version
	I0229 18:20:19.944806   30631 ssh_runner.go:195] Run: which crictl
	I0229 18:20:19.948999   30631 command_runner.go:130] > /usr/bin/crictl
	I0229 18:20:19.949050   30631 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:20:19.993914   30631 command_runner.go:130] > Version:  0.1.0
	I0229 18:20:19.993934   30631 command_runner.go:130] > RuntimeName:  cri-o
	I0229 18:20:19.993938   30631 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 18:20:19.993943   30631 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:20:19.995152   30631 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:20:19.995225   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:20:20.029524   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:20:20.029544   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:20:20.029550   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:20:20.029554   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:20:20.029559   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:20:20.029567   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:20:20.029574   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:20:20.029580   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:20:20.029587   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:20:20.029593   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:20:20.029600   30631 command_runner.go:130] > BuildTags:      
	I0229 18:20:20.029609   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:20:20.029616   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:20:20.029629   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:20:20.029634   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:20:20.029638   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:20:20.029641   30631 command_runner.go:130] >   seccomp
	I0229 18:20:20.029645   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:20:20.029653   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:20:20.029657   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:20:20.029774   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:20:20.065932   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:20:20.065958   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:20:20.065967   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:20:20.065974   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:20:20.065981   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:20:20.065988   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:20:20.065995   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:20:20.066001   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:20:20.066008   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:20:20.066019   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:20:20.066026   30631 command_runner.go:130] > BuildTags:      
	I0229 18:20:20.066036   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:20:20.066046   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:20:20.066052   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:20:20.066065   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:20:20.066074   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:20:20.066083   30631 command_runner.go:130] >   seccomp
	I0229 18:20:20.066098   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:20:20.066107   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:20:20.066117   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:20:20.069274   30631 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:20:20.070565   30631 out.go:177]   - env NO_PROXY=192.168.39.200
	I0229 18:20:20.071819   30631 main.go:141] libmachine: (multinode-051105-m02) Calling .GetIP
	I0229 18:20:20.074543   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:20.074944   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:20:20.074970   30631 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:20:20.075204   30631 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:20:20.080525   30631 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0229 18:20:20.080567   30631 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105 for IP: 192.168.39.104
	I0229 18:20:20.080585   30631 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:20.080715   30631 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:20:20.080769   30631 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:20:20.080787   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:20:20.080808   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:20:20.080825   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:20:20.080839   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:20:20.080907   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:20:20.080944   30631 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:20:20.080957   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:20:20.080980   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:20:20.081003   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:20:20.081025   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:20:20.081064   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:20:20.081096   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /usr/share/ca-certificates/136512.pem
	I0229 18:20:20.081110   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:20.081122   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem -> /usr/share/ca-certificates/13651.pem
	I0229 18:20:20.081452   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:20:20.110603   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:20:20.138409   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:20:20.165685   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:20:20.193030   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:20:20.219664   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:20:20.245868   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:20:20.272791   30631 ssh_runner.go:195] Run: openssl version
	I0229 18:20:20.279383   30631 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:20:20.279468   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:20:20.292124   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:20.297359   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:20.297742   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:20.297794   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:20.304026   30631 command_runner.go:130] > b5213941
	I0229 18:20:20.304268   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:20:20.316859   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:20:20.329819   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:20:20.335060   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:20:20.335220   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:20:20.335273   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:20:20.341388   30631 command_runner.go:130] > 51391683
	I0229 18:20:20.341547   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:20:20.351959   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:20:20.364259   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:20:20.368899   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:20:20.368977   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:20:20.369013   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:20:20.374868   30631 command_runner.go:130] > 3ec20f2e
	I0229 18:20:20.374998   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
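	(The three test/ln blocks above follow the usual OpenSSL CA-store convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs, and a second symlink named after its subject hash lets libssl resolve it at verify time. A minimal sketch of the pattern, with cert.pem standing in for any of the certificates above:)

	# link the certificate into the system cert directory
	sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
	# compute the subject hash openssl uses for lookup (e.g. b5213941 for minikubeCA.pem above)
	HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/cert.pem)
	# expose the cert under <hash>.0 so certificate verification can find it
	sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${HASH}.0"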
	I0229 18:20:20.384860   30631 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:20:20.389819   30631 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:20:20.389856   30631 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:20:20.389944   30631 ssh_runner.go:195] Run: crio config
	I0229 18:20:20.432351   30631 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 18:20:20.432377   30631 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 18:20:20.432387   30631 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 18:20:20.432391   30631 command_runner.go:130] > #
	I0229 18:20:20.432401   30631 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 18:20:20.432409   30631 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 18:20:20.432426   30631 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 18:20:20.432438   30631 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 18:20:20.432450   30631 command_runner.go:130] > # reload'.
	I0229 18:20:20.432460   30631 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 18:20:20.432473   30631 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 18:20:20.432487   30631 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 18:20:20.432500   30631 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 18:20:20.432509   30631 command_runner.go:130] > [crio]
	I0229 18:20:20.432520   30631 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 18:20:20.432531   30631 command_runner.go:130] > # containers images, in this directory.
	I0229 18:20:20.432542   30631 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 18:20:20.432558   30631 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 18:20:20.432570   30631 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 18:20:20.432586   30631 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 18:20:20.432597   30631 command_runner.go:130] > # imagestore = ""
	I0229 18:20:20.432609   30631 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 18:20:20.432622   30631 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 18:20:20.432633   30631 command_runner.go:130] > storage_driver = "overlay"
	I0229 18:20:20.432641   30631 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 18:20:20.432648   30631 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 18:20:20.432654   30631 command_runner.go:130] > storage_option = [
	I0229 18:20:20.432663   30631 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 18:20:20.432672   30631 command_runner.go:130] > ]
	I0229 18:20:20.432682   30631 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 18:20:20.432690   30631 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 18:20:20.432695   30631 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 18:20:20.432703   30631 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 18:20:20.432715   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 18:20:20.432729   30631 command_runner.go:130] > # always happen on a node reboot
	I0229 18:20:20.432740   30631 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 18:20:20.432754   30631 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 18:20:20.432767   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 18:20:20.432776   30631 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 18:20:20.432784   30631 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 18:20:20.432799   30631 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 18:20:20.432814   30631 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 18:20:20.432821   30631 command_runner.go:130] > # internal_wipe = true
	I0229 18:20:20.432835   30631 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 18:20:20.432848   30631 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 18:20:20.432855   30631 command_runner.go:130] > # internal_repair = false
	I0229 18:20:20.432868   30631 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 18:20:20.432878   30631 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 18:20:20.432891   30631 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 18:20:20.432900   30631 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 18:20:20.432907   30631 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 18:20:20.432911   30631 command_runner.go:130] > [crio.api]
	I0229 18:20:20.432916   30631 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 18:20:20.432923   30631 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 18:20:20.432931   30631 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 18:20:20.432942   30631 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 18:20:20.432954   30631 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 18:20:20.432966   30631 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 18:20:20.432973   30631 command_runner.go:130] > # stream_port = "0"
	I0229 18:20:20.432982   30631 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 18:20:20.432992   30631 command_runner.go:130] > # stream_enable_tls = false
	I0229 18:20:20.433002   30631 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 18:20:20.433009   30631 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 18:20:20.433018   30631 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 18:20:20.433032   30631 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 18:20:20.433038   30631 command_runner.go:130] > # minutes.
	I0229 18:20:20.433044   30631 command_runner.go:130] > # stream_tls_cert = ""
	I0229 18:20:20.433053   30631 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 18:20:20.433066   30631 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 18:20:20.433075   30631 command_runner.go:130] > # stream_tls_key = ""
	I0229 18:20:20.433085   30631 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 18:20:20.433106   30631 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 18:20:20.433126   30631 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 18:20:20.433135   30631 command_runner.go:130] > # stream_tls_ca = ""
	I0229 18:20:20.433147   30631 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:20:20.433157   30631 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 18:20:20.433168   30631 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:20:20.433178   30631 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0229 18:20:20.433187   30631 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 18:20:20.433198   30631 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 18:20:20.433208   30631 command_runner.go:130] > [crio.runtime]
	I0229 18:20:20.433217   30631 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 18:20:20.433228   30631 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 18:20:20.433235   30631 command_runner.go:130] > # "nofile=1024:2048"
	I0229 18:20:20.433244   30631 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 18:20:20.433255   30631 command_runner.go:130] > # default_ulimits = [
	I0229 18:20:20.433260   30631 command_runner.go:130] > # ]
	I0229 18:20:20.433273   30631 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 18:20:20.433282   30631 command_runner.go:130] > # no_pivot = false
	I0229 18:20:20.433291   30631 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 18:20:20.433304   30631 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 18:20:20.433314   30631 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 18:20:20.433323   30631 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 18:20:20.433333   30631 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 18:20:20.433344   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:20:20.433353   30631 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 18:20:20.433363   30631 command_runner.go:130] > # Cgroup setting for conmon
	I0229 18:20:20.433373   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 18:20:20.433382   30631 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 18:20:20.433391   30631 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 18:20:20.433401   30631 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 18:20:20.433410   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:20:20.433424   30631 command_runner.go:130] > conmon_env = [
	I0229 18:20:20.433436   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:20:20.433449   30631 command_runner.go:130] > ]
	I0229 18:20:20.433460   30631 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 18:20:20.433468   30631 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 18:20:20.433478   30631 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 18:20:20.433485   30631 command_runner.go:130] > # default_env = [
	I0229 18:20:20.433493   30631 command_runner.go:130] > # ]
	I0229 18:20:20.433501   30631 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 18:20:20.433516   30631 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0229 18:20:20.433521   30631 command_runner.go:130] > # selinux = false
	I0229 18:20:20.433530   30631 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 18:20:20.433539   30631 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 18:20:20.433553   30631 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 18:20:20.433563   30631 command_runner.go:130] > # seccomp_profile = ""
	I0229 18:20:20.433576   30631 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 18:20:20.433587   30631 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 18:20:20.433600   30631 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 18:20:20.433610   30631 command_runner.go:130] > # which might increase security.
	I0229 18:20:20.433618   30631 command_runner.go:130] > # This option is currently deprecated,
	I0229 18:20:20.433630   30631 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 18:20:20.433638   30631 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 18:20:20.433648   30631 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 18:20:20.433660   30631 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 18:20:20.433673   30631 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 18:20:20.433684   30631 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 18:20:20.433696   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:20:20.433703   30631 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 18:20:20.433715   30631 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 18:20:20.433725   30631 command_runner.go:130] > # the cgroup blockio controller.
	I0229 18:20:20.433732   30631 command_runner.go:130] > # blockio_config_file = ""
	I0229 18:20:20.433744   30631 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 18:20:20.433751   30631 command_runner.go:130] > # blockio parameters.
	I0229 18:20:20.433759   30631 command_runner.go:130] > # blockio_reload = false
	I0229 18:20:20.433769   30631 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 18:20:20.433778   30631 command_runner.go:130] > # irqbalance daemon.
	I0229 18:20:20.433786   30631 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 18:20:20.433799   30631 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0229 18:20:20.433813   30631 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 18:20:20.433826   30631 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 18:20:20.433839   30631 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 18:20:20.433851   30631 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 18:20:20.433861   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:20:20.433869   30631 command_runner.go:130] > # rdt_config_file = ""
	I0229 18:20:20.433881   30631 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 18:20:20.433889   30631 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 18:20:20.433914   30631 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 18:20:20.433923   30631 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 18:20:20.433931   30631 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 18:20:20.433943   30631 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 18:20:20.433947   30631 command_runner.go:130] > # will be added.
	I0229 18:20:20.433952   30631 command_runner.go:130] > # default_capabilities = [
	I0229 18:20:20.433958   30631 command_runner.go:130] > # 	"CHOWN",
	I0229 18:20:20.433962   30631 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 18:20:20.433966   30631 command_runner.go:130] > # 	"FSETID",
	I0229 18:20:20.433969   30631 command_runner.go:130] > # 	"FOWNER",
	I0229 18:20:20.433973   30631 command_runner.go:130] > # 	"SETGID",
	I0229 18:20:20.433976   30631 command_runner.go:130] > # 	"SETUID",
	I0229 18:20:20.433982   30631 command_runner.go:130] > # 	"SETPCAP",
	I0229 18:20:20.433986   30631 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 18:20:20.433990   30631 command_runner.go:130] > # 	"KILL",
	I0229 18:20:20.433993   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434000   30631 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 18:20:20.434009   30631 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 18:20:20.434014   30631 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 18:20:20.434020   30631 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 18:20:20.434025   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:20:20.434032   30631 command_runner.go:130] > # default_sysctls = [
	I0229 18:20:20.434034   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434039   30631 command_runner.go:130] > # List of devices on the host that a
	I0229 18:20:20.434047   30631 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 18:20:20.434051   30631 command_runner.go:130] > # allowed_devices = [
	I0229 18:20:20.434056   30631 command_runner.go:130] > # 	"/dev/fuse",
	I0229 18:20:20.434060   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434066   30631 command_runner.go:130] > # List of additional devices. specified as
	I0229 18:20:20.434073   30631 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 18:20:20.434082   30631 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 18:20:20.434088   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:20:20.434094   30631 command_runner.go:130] > # additional_devices = [
	I0229 18:20:20.434097   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434101   30631 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 18:20:20.434107   30631 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 18:20:20.434110   30631 command_runner.go:130] > # 	"/etc/cdi",
	I0229 18:20:20.434116   30631 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 18:20:20.434119   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434126   30631 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 18:20:20.434133   30631 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 18:20:20.434137   30631 command_runner.go:130] > # Defaults to false.
	I0229 18:20:20.434142   30631 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 18:20:20.434148   30631 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 18:20:20.434154   30631 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 18:20:20.434158   30631 command_runner.go:130] > # hooks_dir = [
	I0229 18:20:20.434162   30631 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 18:20:20.434168   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434174   30631 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 18:20:20.434181   30631 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 18:20:20.434187   30631 command_runner.go:130] > # its default mounts from the following two files:
	I0229 18:20:20.434192   30631 command_runner.go:130] > #
	I0229 18:20:20.434198   30631 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 18:20:20.434206   30631 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 18:20:20.434212   30631 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 18:20:20.434216   30631 command_runner.go:130] > #
	I0229 18:20:20.434222   30631 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 18:20:20.434228   30631 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 18:20:20.434236   30631 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 18:20:20.434241   30631 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 18:20:20.434245   30631 command_runner.go:130] > #
	I0229 18:20:20.434249   30631 command_runner.go:130] > # default_mounts_file = ""
	I0229 18:20:20.434256   30631 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 18:20:20.434262   30631 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 18:20:20.434268   30631 command_runner.go:130] > pids_limit = 1024
	I0229 18:20:20.434274   30631 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0229 18:20:20.434281   30631 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 18:20:20.434289   30631 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 18:20:20.434297   30631 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 18:20:20.434303   30631 command_runner.go:130] > # log_size_max = -1
	I0229 18:20:20.434310   30631 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 18:20:20.434313   30631 command_runner.go:130] > # log_to_journald = false
	I0229 18:20:20.434319   30631 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 18:20:20.434326   30631 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 18:20:20.434331   30631 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 18:20:20.434336   30631 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 18:20:20.434341   30631 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 18:20:20.434346   30631 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 18:20:20.434351   30631 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 18:20:20.434357   30631 command_runner.go:130] > # read_only = false
	I0229 18:20:20.434362   30631 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 18:20:20.434370   30631 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 18:20:20.434374   30631 command_runner.go:130] > # live configuration reload.
	I0229 18:20:20.434380   30631 command_runner.go:130] > # log_level = "info"
	I0229 18:20:20.434385   30631 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 18:20:20.434389   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:20:20.434395   30631 command_runner.go:130] > # log_filter = ""
	I0229 18:20:20.434401   30631 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 18:20:20.434412   30631 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 18:20:20.434424   30631 command_runner.go:130] > # separated by comma.
	I0229 18:20:20.434438   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:20:20.434448   30631 command_runner.go:130] > # uid_mappings = ""
	I0229 18:20:20.434455   30631 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 18:20:20.434464   30631 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 18:20:20.434468   30631 command_runner.go:130] > # separated by comma.
	I0229 18:20:20.434475   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:20:20.434478   30631 command_runner.go:130] > # gid_mappings = ""
	I0229 18:20:20.434487   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 18:20:20.434498   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:20:20.434508   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:20:20.434518   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:20:20.434523   30631 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 18:20:20.434530   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 18:20:20.434536   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:20:20.434541   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:20:20.434548   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:20:20.434552   30631 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 18:20:20.434557   30631 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 18:20:20.434562   30631 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 18:20:20.434567   30631 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 18:20:20.434571   30631 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 18:20:20.434576   30631 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 18:20:20.434582   30631 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 18:20:20.434586   30631 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 18:20:20.434590   30631 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 18:20:20.434594   30631 command_runner.go:130] > drop_infra_ctr = false
	I0229 18:20:20.434599   30631 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 18:20:20.434604   30631 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 18:20:20.434611   30631 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 18:20:20.434615   30631 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 18:20:20.434621   30631 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 18:20:20.434626   30631 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 18:20:20.434631   30631 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 18:20:20.434640   30631 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 18:20:20.434646   30631 command_runner.go:130] > # shared_cpuset = ""
	I0229 18:20:20.434655   30631 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 18:20:20.434663   30631 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 18:20:20.434669   30631 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 18:20:20.434681   30631 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 18:20:20.434685   30631 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 18:20:20.434690   30631 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 18:20:20.434695   30631 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 18:20:20.434699   30631 command_runner.go:130] > # enable_criu_support = false
	I0229 18:20:20.434704   30631 command_runner.go:130] > # Enable/disable the generation of the container,
	I0229 18:20:20.434709   30631 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0229 18:20:20.434713   30631 command_runner.go:130] > # enable_pod_events = false
	I0229 18:20:20.434719   30631 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 18:20:20.434728   30631 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 18:20:20.434736   30631 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 18:20:20.434744   30631 command_runner.go:130] > # default_runtime = "runc"
	I0229 18:20:20.434752   30631 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 18:20:20.434764   30631 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0229 18:20:20.434777   30631 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 18:20:20.434785   30631 command_runner.go:130] > # creation as a file is not desired either.
	I0229 18:20:20.434799   30631 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 18:20:20.434807   30631 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 18:20:20.434815   30631 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 18:20:20.434820   30631 command_runner.go:130] > # ]
	I0229 18:20:20.434830   30631 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 18:20:20.434839   30631 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 18:20:20.434847   30631 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 18:20:20.434854   30631 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 18:20:20.434858   30631 command_runner.go:130] > #
	I0229 18:20:20.434866   30631 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 18:20:20.434873   30631 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 18:20:20.434879   30631 command_runner.go:130] > # runtime_type = "oci"
	I0229 18:20:20.434902   30631 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 18:20:20.434910   30631 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 18:20:20.434917   30631 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 18:20:20.434923   30631 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 18:20:20.434927   30631 command_runner.go:130] > # monitor_env = []
	I0229 18:20:20.434934   30631 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 18:20:20.434941   30631 command_runner.go:130] > # allowed_annotations = []
	I0229 18:20:20.434948   30631 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 18:20:20.434953   30631 command_runner.go:130] > # Where:
	I0229 18:20:20.434960   30631 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 18:20:20.434971   30631 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 18:20:20.434981   30631 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 18:20:20.434991   30631 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 18:20:20.434998   30631 command_runner.go:130] > #   in $PATH.
	I0229 18:20:20.435006   30631 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 18:20:20.435015   30631 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 18:20:20.435035   30631 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 18:20:20.435041   30631 command_runner.go:130] > #   state.
	I0229 18:20:20.435052   30631 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 18:20:20.435062   30631 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0229 18:20:20.435072   30631 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 18:20:20.435080   30631 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 18:20:20.435090   30631 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 18:20:20.435101   30631 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 18:20:20.435109   30631 command_runner.go:130] > #   The currently recognized values are:
	I0229 18:20:20.435119   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 18:20:20.435131   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 18:20:20.435139   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 18:20:20.435148   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 18:20:20.435158   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 18:20:20.435164   30631 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 18:20:20.435170   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 18:20:20.435176   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 18:20:20.435182   30631 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 18:20:20.435191   30631 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 18:20:20.435197   30631 command_runner.go:130] > #   deprecated option "conmon".
	I0229 18:20:20.435206   30631 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 18:20:20.435214   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 18:20:20.435223   30631 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 18:20:20.435231   30631 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 18:20:20.435242   30631 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0229 18:20:20.435248   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 18:20:20.435258   30631 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 18:20:20.435271   30631 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 18:20:20.435279   30631 command_runner.go:130] > #
	I0229 18:20:20.435286   30631 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 18:20:20.435294   30631 command_runner.go:130] > #
	I0229 18:20:20.435303   30631 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 18:20:20.435316   30631 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 18:20:20.435324   30631 command_runner.go:130] > #
	I0229 18:20:20.435333   30631 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 18:20:20.435345   30631 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 18:20:20.435353   30631 command_runner.go:130] > #
	I0229 18:20:20.435361   30631 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 18:20:20.435370   30631 command_runner.go:130] > # feature.
	I0229 18:20:20.435375   30631 command_runner.go:130] > #
	I0229 18:20:20.435385   30631 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 18:20:20.435395   30631 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 18:20:20.435407   30631 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 18:20:20.435425   30631 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 18:20:20.435437   30631 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 18:20:20.435446   30631 command_runner.go:130] > #
	I0229 18:20:20.435455   30631 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 18:20:20.435469   30631 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 18:20:20.435476   30631 command_runner.go:130] > #
	I0229 18:20:20.435485   30631 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 18:20:20.435497   30631 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 18:20:20.435505   30631 command_runner.go:130] > #
	I0229 18:20:20.435516   30631 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 18:20:20.435528   30631 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 18:20:20.435537   30631 command_runner.go:130] > # limitation.
	I0229 18:20:20.435543   30631 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 18:20:20.435553   30631 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 18:20:20.435557   30631 command_runner.go:130] > runtime_type = "oci"
	I0229 18:20:20.435561   30631 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 18:20:20.435567   30631 command_runner.go:130] > runtime_config_path = ""
	I0229 18:20:20.435571   30631 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 18:20:20.435575   30631 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 18:20:20.435581   30631 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 18:20:20.435585   30631 command_runner.go:130] > monitor_env = [
	I0229 18:20:20.435590   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:20:20.435596   30631 command_runner.go:130] > ]
	I0229 18:20:20.435601   30631 command_runner.go:130] > privileged_without_host_devices = false
	I0229 18:20:20.435607   30631 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 18:20:20.435615   30631 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 18:20:20.435621   30631 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 18:20:20.435630   30631 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0229 18:20:20.435637   30631 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 18:20:20.435645   30631 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 18:20:20.435654   30631 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 18:20:20.435665   30631 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 18:20:20.435671   30631 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 18:20:20.435678   30631 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 18:20:20.435684   30631 command_runner.go:130] > # Example:
	I0229 18:20:20.435689   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 18:20:20.435694   30631 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 18:20:20.435701   30631 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 18:20:20.435705   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 18:20:20.435709   30631 command_runner.go:130] > # cpuset = 0
	I0229 18:20:20.435713   30631 command_runner.go:130] > # cpushares = "0-1"
	I0229 18:20:20.435716   30631 command_runner.go:130] > # Where:
	I0229 18:20:20.435721   30631 command_runner.go:130] > # The workload name is workload-type.
	I0229 18:20:20.435727   30631 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 18:20:20.435735   30631 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 18:20:20.435740   30631 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 18:20:20.435748   30631 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 18:20:20.435755   30631 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0229 18:20:20.435762   30631 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 18:20:20.435768   30631 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 18:20:20.435775   30631 command_runner.go:130] > # Default value is set to true
	I0229 18:20:20.435779   30631 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 18:20:20.435787   30631 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 18:20:20.435792   30631 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 18:20:20.435798   30631 command_runner.go:130] > # Default value is set to 'false'
	I0229 18:20:20.435803   30631 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 18:20:20.435809   30631 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 18:20:20.435815   30631 command_runner.go:130] > #
	I0229 18:20:20.435820   30631 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 18:20:20.435828   30631 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 18:20:20.435834   30631 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 18:20:20.435842   30631 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 18:20:20.435848   30631 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 18:20:20.435854   30631 command_runner.go:130] > [crio.image]
	I0229 18:20:20.435859   30631 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 18:20:20.435865   30631 command_runner.go:130] > # default_transport = "docker://"
	I0229 18:20:20.435871   30631 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 18:20:20.435878   30631 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:20:20.435884   30631 command_runner.go:130] > # global_auth_file = ""
	I0229 18:20:20.435888   30631 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 18:20:20.435895   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:20:20.435900   30631 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 18:20:20.435906   30631 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 18:20:20.435914   30631 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:20:20.435918   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:20:20.435923   30631 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 18:20:20.435928   30631 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 18:20:20.435936   30631 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0229 18:20:20.435942   30631 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0229 18:20:20.435950   30631 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 18:20:20.435954   30631 command_runner.go:130] > # pause_command = "/pause"
	I0229 18:20:20.435961   30631 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 18:20:20.435968   30631 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 18:20:20.435979   30631 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 18:20:20.435989   30631 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 18:20:20.436004   30631 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 18:20:20.436014   30631 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 18:20:20.436023   30631 command_runner.go:130] > # pinned_images = [
	I0229 18:20:20.436027   30631 command_runner.go:130] > # ]
	I0229 18:20:20.436037   30631 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 18:20:20.436049   30631 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 18:20:20.436061   30631 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 18:20:20.436071   30631 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 18:20:20.436081   30631 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 18:20:20.436090   30631 command_runner.go:130] > # signature_policy = ""
	I0229 18:20:20.436102   30631 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 18:20:20.436113   30631 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 18:20:20.436122   30631 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 18:20:20.436128   30631 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0229 18:20:20.436134   30631 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 18:20:20.436139   30631 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 18:20:20.436147   30631 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 18:20:20.436154   30631 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 18:20:20.436161   30631 command_runner.go:130] > # changing them here.
	I0229 18:20:20.436165   30631 command_runner.go:130] > # insecure_registries = [
	I0229 18:20:20.436168   30631 command_runner.go:130] > # ]
	I0229 18:20:20.436175   30631 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 18:20:20.436180   30631 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 18:20:20.436186   30631 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 18:20:20.436191   30631 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 18:20:20.436195   30631 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 18:20:20.436201   30631 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0229 18:20:20.436207   30631 command_runner.go:130] > # CNI plugins.
	I0229 18:20:20.436211   30631 command_runner.go:130] > [crio.network]
	I0229 18:20:20.436219   30631 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 18:20:20.436224   30631 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0229 18:20:20.436229   30631 command_runner.go:130] > # cni_default_network = ""
	I0229 18:20:20.436234   30631 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 18:20:20.436241   30631 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 18:20:20.436246   30631 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 18:20:20.436251   30631 command_runner.go:130] > # plugin_dirs = [
	I0229 18:20:20.436255   30631 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 18:20:20.436258   30631 command_runner.go:130] > # ]
	I0229 18:20:20.436265   30631 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 18:20:20.436269   30631 command_runner.go:130] > [crio.metrics]
	I0229 18:20:20.436274   30631 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 18:20:20.436279   30631 command_runner.go:130] > enable_metrics = true
	I0229 18:20:20.436284   30631 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 18:20:20.436291   30631 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 18:20:20.436296   30631 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0229 18:20:20.436304   30631 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 18:20:20.436310   30631 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 18:20:20.436316   30631 command_runner.go:130] > # metrics_collectors = [
	I0229 18:20:20.436320   30631 command_runner.go:130] > # 	"operations",
	I0229 18:20:20.436326   30631 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 18:20:20.436330   30631 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 18:20:20.436335   30631 command_runner.go:130] > # 	"operations_errors",
	I0229 18:20:20.436339   30631 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 18:20:20.436345   30631 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 18:20:20.436350   30631 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 18:20:20.436354   30631 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 18:20:20.436358   30631 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 18:20:20.436364   30631 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 18:20:20.436368   30631 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 18:20:20.436375   30631 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 18:20:20.436379   30631 command_runner.go:130] > # 	"containers_oom_total",
	I0229 18:20:20.436385   30631 command_runner.go:130] > # 	"containers_oom",
	I0229 18:20:20.436388   30631 command_runner.go:130] > # 	"processes_defunct",
	I0229 18:20:20.436395   30631 command_runner.go:130] > # 	"operations_total",
	I0229 18:20:20.436399   30631 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 18:20:20.436404   30631 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 18:20:20.436408   30631 command_runner.go:130] > # 	"operations_errors_total",
	I0229 18:20:20.436412   30631 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 18:20:20.436420   30631 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 18:20:20.436426   30631 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 18:20:20.436431   30631 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 18:20:20.436435   30631 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 18:20:20.436439   30631 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 18:20:20.436445   30631 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 18:20:20.436449   30631 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 18:20:20.436455   30631 command_runner.go:130] > # ]
	I0229 18:20:20.436460   30631 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 18:20:20.436466   30631 command_runner.go:130] > # metrics_port = 9090
	I0229 18:20:20.436472   30631 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 18:20:20.436481   30631 command_runner.go:130] > # metrics_socket = ""
	I0229 18:20:20.436489   30631 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 18:20:20.436501   30631 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 18:20:20.436514   30631 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 18:20:20.436524   30631 command_runner.go:130] > # certificate on any modification event.
	I0229 18:20:20.436529   30631 command_runner.go:130] > # metrics_cert = ""
	I0229 18:20:20.436540   30631 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 18:20:20.436551   30631 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 18:20:20.436560   30631 command_runner.go:130] > # metrics_key = ""
	I0229 18:20:20.436569   30631 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 18:20:20.436578   30631 command_runner.go:130] > [crio.tracing]
	I0229 18:20:20.436586   30631 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 18:20:20.436595   30631 command_runner.go:130] > # enable_tracing = false
	I0229 18:20:20.436603   30631 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0229 18:20:20.436613   30631 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 18:20:20.436626   30631 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 18:20:20.436638   30631 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 18:20:20.436647   30631 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 18:20:20.436653   30631 command_runner.go:130] > [crio.nri]
	I0229 18:20:20.436663   30631 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 18:20:20.436670   30631 command_runner.go:130] > # enable_nri = false
	I0229 18:20:20.436680   30631 command_runner.go:130] > # NRI socket to listen on.
	I0229 18:20:20.436689   30631 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 18:20:20.436698   30631 command_runner.go:130] > # NRI plugin directory to use.
	I0229 18:20:20.436706   30631 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 18:20:20.436717   30631 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 18:20:20.436727   30631 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 18:20:20.436736   30631 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 18:20:20.436746   30631 command_runner.go:130] > # nri_disable_connections = false
	I0229 18:20:20.436753   30631 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 18:20:20.436762   30631 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 18:20:20.436773   30631 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 18:20:20.436783   30631 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 18:20:20.436796   30631 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 18:20:20.436805   30631 command_runner.go:130] > [crio.stats]
	I0229 18:20:20.436814   30631 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 18:20:20.436824   30631 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 18:20:20.436829   30631 command_runner.go:130] > # stats_collection_period = 0
	I0229 18:20:20.437264   30631 command_runner.go:130] ! time="2024-02-29 18:20:20.393297973Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 18:20:20.437289   30631 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
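The runtime table dumped above is plain TOML, so its fields can be read back programmatically. Below is a minimal sketch, not part of minikube itself; it assumes the github.com/BurntSushi/toml package and the default file location /etc/crio/crio.conf (CRI-O also honors drop-ins under /etc/crio/crio.conf.d/), and it only echoes the [crio.runtime.runtimes.runc] values shown in the log.

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// runtimeEntry mirrors the runc fields printed in the dump above.
type runtimeEntry struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
	MonitorPath string `toml:"monitor_path"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeEntry `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Path is an assumption for this sketch; adjust if the config lives elsewhere.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	rc := cfg.Crio.Runtime.Runtimes["runc"]
	fmt.Printf("runc: path=%s type=%s root=%s monitor=%s\n",
		rc.RuntimePath, rc.RuntimeType, rc.RuntimeRoot, rc.MonitorPath)
}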
	I0229 18:20:20.437574   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:20:20.437586   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:20:20.437626   30631 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:20:20.437651   30631 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-051105 NodeName:multinode-051105-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:20:20.437798   30631 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-051105-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:20:20.437868   30631 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-051105-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
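The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch follows that simply splits such a stream and reports each document's kind; it assumes the gopkg.in/yaml.v3 package and a local copy of the rendered file named kubeadm.yaml, and it is not how minikube itself consumes the config.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the config rendered in the log above.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Expected output for the stream above: four documents,
		// from kubeadm.k8s.io, kubelet.config.k8s.io and kubeproxy.config.k8s.io.
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}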
	I0229 18:20:20.437944   30631 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:20:20.449313   30631 command_runner.go:130] > kubeadm
	I0229 18:20:20.449336   30631 command_runner.go:130] > kubectl
	I0229 18:20:20.449342   30631 command_runner.go:130] > kubelet
	I0229 18:20:20.449377   30631 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:20:20.449433   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 18:20:20.460617   30631 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0229 18:20:20.480067   30631 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:20:20.500050   30631 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0229 18:20:20.504692   30631 command_runner.go:130] > 192.168.39.200	control-plane.minikube.internal
	I0229 18:20:20.504808   30631 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:20:20.505104   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:20:20.505220   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:20:20.505261   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:20:20.521906   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0229 18:20:20.522273   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:20:20.522716   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:20:20.522741   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:20:20.523017   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:20:20.523242   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:20:20.523384   30631 start.go:304] JoinCluster: &{Name:multinode-051105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:20:20.523528   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 18:20:20.523549   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:20:20.526233   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:20:20.526617   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:20:20.526645   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:20:20.526767   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:20:20.526927   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:20:20.527064   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:20:20.527179   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:20:20.727530   30631 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cpz64u.kdboxucs3z43setf --discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
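The join command above is produced by running kubeadm on the control-plane host, exactly as the ssh_runner line shows. A small sketch of the same step using os/exec, assuming it runs directly on that host with the kubeadm binary at the path used in this run; the token and CA hash are printed by kubeadm, never hard-coded.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: sudo .../kubeadm token create --print-join-command --ttl=0
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubeadm",
		"token", "create", "--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		log.Fatalf("token create failed: %v\n%s", err, out)
	}
	joinCmd := strings.TrimSpace(string(out))
	fmt.Println("join command:", joinCmd)
}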
	I0229 18:20:20.728425   30631 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 18:20:20.728460   30631 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:20:20.728735   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:20:20.728774   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:20:20.743309   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I0229 18:20:20.743849   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:20:20.744325   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:20:20.744344   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:20:20.744709   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:20:20.744869   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:20:20.745039   30631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-051105-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 18:20:20.745058   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:20:20.747886   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:20:20.748283   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:20:20.748302   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:20:20.748469   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:20:20.748625   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:20:20.748756   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:20:20.748891   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:20:20.942761   30631 command_runner.go:130] > node/multinode-051105-m02 cordoned
	I0229 18:20:23.983711   30631 command_runner.go:130] > pod "busybox-5b5d89c9d6-m9jth" has DeletionTimestamp older than 1 seconds, skipping
	I0229 18:20:23.983732   30631 command_runner.go:130] > node/multinode-051105-m02 drained
	I0229 18:20:23.985848   30631 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 18:20:23.985866   30631 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-c2ztr, kube-system/kube-proxy-cbl8s
	I0229 18:20:23.986172   30631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-051105-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.241105003s)
	I0229 18:20:23.986193   30631 node.go:108] successfully drained node "m02"
	I0229 18:20:23.986571   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:20:23.986772   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:20:23.987184   30631 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 18:20:23.987233   30631 round_trippers.go:463] DELETE https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:23.987238   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:23.987245   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:23.987252   30631 round_trippers.go:473]     Content-Type: application/json
	I0229 18:20:23.987255   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:24.001521   30631 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 18:20:24.001549   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:24.001559   30631 round_trippers.go:580]     Audit-Id: 330c9b65-0b50-4639-ad2c-5cf02077cf7d
	I0229 18:20:24.001566   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:24.001570   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:24.001575   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:24.001579   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:24.001583   30631 round_trippers.go:580]     Content-Length: 171
	I0229 18:20:24.001587   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:23 GMT
	I0229 18:20:24.001609   30631 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-051105-m02","kind":"nodes","uid":"d9c0ff3f-8bc0-4054-a484-27b1793b2e4e"}}
	I0229 18:20:24.001646   30631 node.go:124] successfully deleted node "m02"
	I0229 18:20:24.001658   30631 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
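The removal above ends with a bare DELETE against /api/v1/nodes/multinode-051105-m02. A minimal client-go sketch of that single call is shown below; the kubeconfig path is the one this run uses, and error handling is reduced to log.Fatal for brevity.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18259-6428/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent to the DELETE /api/v1/nodes/multinode-051105-m02 request in the log.
	if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-051105-m02", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("node deleted")
}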
	I0229 18:20:24.001682   30631 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 18:20:24.001701   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cpz64u.kdboxucs3z43setf --discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-051105-m02"
	I0229 18:20:24.063895   30631 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 18:20:24.240751   30631 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 18:20:24.240790   30631 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 18:20:24.315366   30631 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:20:24.315618   30631 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:20:24.315944   30631 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 18:20:24.465116   30631 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 18:20:24.993181   30631 command_runner.go:130] > This node has joined the cluster:
	I0229 18:20:24.993212   30631 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 18:20:24.993223   30631 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 18:20:24.993233   30631 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 18:20:24.995873   30631 command_runner.go:130] ! W0229 18:20:24.031851    2602 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0229 18:20:24.995909   30631 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0229 18:20:24.995920   30631 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0229 18:20:24.995935   30631 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0229 18:20:24.995966   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 18:20:25.287044   30631 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-051105 minikube.k8s.io/updated_at=2024_02_29T18_20_25_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:20:25.387556   30631 command_runner.go:130] > node/multinode-051105-m02 labeled
	I0229 18:20:25.395355   30631 command_runner.go:130] > node/multinode-051105-m03 labeled
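The labeling step above shells out to kubectl with --overwrite and a label selector. For illustration only, the same labels can be set on a single node through the API with a merge patch; this is a sketch using client-go, not the code path minikube takes, and the kubeconfig path is the in-VM one from the log.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Merge patch carrying a subset of the labels the kubectl command above applies.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-051105-m02",
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("labels applied")
}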
	I0229 18:20:25.397166   30631 start.go:306] JoinCluster complete in 4.873778059s
	I0229 18:20:25.397195   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:20:25.397203   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:20:25.397262   30631 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:20:25.403494   30631 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 18:20:25.403518   30631 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 18:20:25.403526   30631 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 18:20:25.403536   30631 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:20:25.403545   30631 command_runner.go:130] > Access: 2024-02-29 18:17:58.604411532 +0000
	I0229 18:20:25.403553   30631 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 18:20:25.403563   30631 command_runner.go:130] > Change: 2024-02-29 18:17:57.283411532 +0000
	I0229 18:20:25.403568   30631 command_runner.go:130] >  Birth: -
	I0229 18:20:25.403749   30631 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:20:25.403766   30631 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 18:20:25.423776   30631 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:20:25.783572   30631 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:20:25.783594   30631 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:20:25.783599   30631 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 18:20:25.783604   30631 command_runner.go:130] > daemonset.apps/kindnet configured
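Before applying the kindnet manifest, the step above verifies that the portmap CNI plugin binary is present by running stat on it. A tiny sketch of just that check, assuming the default plugin directory /opt/cni/bin/ shown earlier in the config dump:

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Reproduces the `stat /opt/cni/bin/portmap` check from the log.
	info, err := os.Stat("/opt/cni/bin/portmap")
	if err != nil {
		log.Fatalf("portmap CNI plugin missing: %v", err)
	}
	fmt.Printf("%s: %d bytes, mode %s\n", info.Name(), info.Size(), info.Mode())
}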
	I0229 18:20:25.783937   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:20:25.784182   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:20:25.784473   30631 round_trippers.go:463] GET https://192.168.39.200:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:20:25.784486   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.784493   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.784498   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.788584   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:20:25.788601   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.788607   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.788611   30631 round_trippers.go:580]     Audit-Id: a4e823db-d019-4183-8335-9183065a0029
	I0229 18:20:25.788614   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.788617   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.788620   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.788623   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.788628   30631 round_trippers.go:580]     Content-Length: 291
	I0229 18:20:25.788745   30631 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"980f57f9-4c9b-43a5-b35c-61bcb3268764","resourceVersion":"962","creationTimestamp":"2024-02-29T18:07:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 18:20:25.788825   30631 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-051105" context rescaled to 1 replicas
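The rescale above goes through the scale subresource of the coredns deployment (GET .../deployments/coredns/scale followed by an update when the replica count differs). A minimal client-go sketch of that sequence, using the same kubeconfig path as this run:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18259-6428/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.TODO()
	// Read the current scale of kube-system/coredns, as in the GET request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write the desired replica count back through the scale subresource.
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("coredns scaled to %d replica(s)", scale.Spec.Replicas)
}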
	I0229 18:20:25.788851   30631 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 18:20:25.790718   30631 out.go:177] * Verifying Kubernetes components...
	I0229 18:20:25.792052   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:20:25.809707   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:20:25.809929   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:20:25.810132   30631 node_ready.go:35] waiting up to 6m0s for node "multinode-051105-m02" to be "Ready" ...
	I0229 18:20:25.810208   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:25.810216   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.810223   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.810233   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.812910   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:25.812924   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.812930   30631 round_trippers.go:580]     Audit-Id: 871187ee-6ed3-4f30-b140-18435599cbb4
	I0229 18:20:25.812934   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.812937   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.812950   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.812957   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.812960   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.813278   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"ccce9f48-c73f-4045-b0aa-ccc8f0ee366c","resourceVersion":"1114","creationTimestamp":"2024-02-29T18:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_20_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 18:20:25.813598   30631 node_ready.go:49] node "multinode-051105-m02" has status "Ready":"True"
	I0229 18:20:25.813614   30631 node_ready.go:38] duration metric: took 3.468445ms waiting for node "multinode-051105-m02" to be "Ready" ...
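The "Ready":"True" verdict above comes from the NodeReady condition in the node's status. A short client-go sketch that reads the same condition for the freshly joined worker; the node name and kubeconfig path are taken from this run.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18259-6428/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-051105-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The readiness check in the log inspects this condition on the node object.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}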
	I0229 18:20:25.813623   30631 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:20:25.813667   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:20:25.813675   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.813681   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.813685   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.817282   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:25.817301   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.817310   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.817316   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.817322   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.817329   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.817333   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.817338   30631 round_trippers.go:580]     Audit-Id: 24400bad-8d56-46c4-8b9a-fdfe0579c067
	I0229 18:20:25.818905   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1121"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81865 chars]
	I0229 18:20:25.821229   30631 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.821292   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:20:25.821300   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.821307   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.821310   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.823517   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:25.823534   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.823541   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.823546   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.823550   30631 round_trippers.go:580]     Audit-Id: 39a4a6df-9d5d-4762-9dde-6e0bb8b30ddb
	I0229 18:20:25.823555   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.823560   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.823564   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.823814   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 18:20:25.824204   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:25.824218   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.824228   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.824233   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.826236   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.826252   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.826261   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.826267   30631 round_trippers.go:580]     Audit-Id: a3ca5a26-0b0a-4706-a3e2-326bea688a9e
	I0229 18:20:25.826271   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.826277   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.826282   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.826288   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.826603   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:25.826961   30631 pod_ready.go:92] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:25.826978   30631 pod_ready.go:81] duration metric: took 5.729389ms waiting for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.826986   30631 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.827059   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-051105
	I0229 18:20:25.827072   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.827081   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.827086   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.828925   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.828940   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.828948   30631 round_trippers.go:580]     Audit-Id: 0af5aaf1-3ede-4121-b70b-1ae7ea07ed5e
	I0229 18:20:25.828955   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.828960   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.828965   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.828969   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.828974   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.829275   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-051105","namespace":"kube-system","uid":"e73d8125-9770-4ddf-a382-a19adc1ed94f","resourceVersion":"948","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.200:2379","kubernetes.io/config.hash":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.mirror":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.seen":"2024-02-29T18:06:55.285569285Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 18:20:25.829595   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:25.829608   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.829616   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.829621   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.831448   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.831466   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.831474   30631 round_trippers.go:580]     Audit-Id: d1e1df2e-f98a-4e91-b458-897ef3080b2e
	I0229 18:20:25.831480   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.831484   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.831487   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.831491   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.831495   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.831667   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:25.831932   30631 pod_ready.go:92] pod "etcd-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:25.831946   30631 pod_ready.go:81] duration metric: took 4.95031ms waiting for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.831967   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.832021   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-051105
	I0229 18:20:25.832031   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.832042   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.832051   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.833632   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.833648   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.833656   30631 round_trippers.go:580]     Audit-Id: fe3b33dc-7d90-454f-a9c0-754e7f8a60ba
	I0229 18:20:25.833663   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.833667   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.833671   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.833675   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.833678   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.833940   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-051105","namespace":"kube-system","uid":"722abb81-d303-4fa9-bcbb-8c16aaf4421d","resourceVersion":"925","creationTimestamp":"2024-02-29T18:07:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.200:8443","kubernetes.io/config.hash":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.mirror":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.seen":"2024-02-29T18:07:02.423715355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 18:20:25.834360   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:25.834375   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.834382   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.834385   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.836216   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.836228   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.836234   30631 round_trippers.go:580]     Audit-Id: 24e564f2-971d-4b31-8243-250a64712f1a
	I0229 18:20:25.836238   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.836241   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.836243   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.836247   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.836254   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.836406   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:25.836755   30631 pod_ready.go:92] pod "kube-apiserver-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:25.836772   30631 pod_ready.go:81] duration metric: took 4.793236ms waiting for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.836780   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.836820   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-051105
	I0229 18:20:25.836828   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.836834   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.836838   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.838701   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.838711   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.838716   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.838718   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.838721   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.838723   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.838726   30631 round_trippers.go:580]     Audit-Id: cce57930-e6e5-4362-a166-8d8c55ce3538
	I0229 18:20:25.838729   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.838923   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-051105","namespace":"kube-system","uid":"a3156cba-a585-47c6-8b26-2069af0021ce","resourceVersion":"929","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.mirror":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.seen":"2024-02-29T18:06:55.285572192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 18:20:25.839266   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:25.839280   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:25.839286   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:25.839291   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:25.840891   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:20:25.840908   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:25.840916   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:25.840920   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:25.840923   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:25.840936   30631 round_trippers.go:580]     Audit-Id: 614f4b83-9575-48ec-84b9-0e37a8715226
	I0229 18:20:25.840940   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:25.840944   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:25.841306   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:25.841562   30631 pod_ready.go:92] pod "kube-controller-manager-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:25.841575   30631 pod_ready.go:81] duration metric: took 4.788802ms waiting for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:25.841582   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:26.010938   30631 request.go:629] Waited for 169.310482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:20:26.010987   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:20:26.010998   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:26.011016   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:26.011026   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:26.013701   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:26.013727   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:26.013737   30631 round_trippers.go:580]     Audit-Id: bcc23f43-b105-4625-b666-a4611ba901ce
	I0229 18:20:26.013743   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:26.013749   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:26.013754   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:26.013757   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:26.013761   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:25 GMT
	I0229 18:20:26.014167   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"1118","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0229 18:20:26.210956   30631 request.go:629] Waited for 196.375823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:26.211009   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:26.211016   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:26.211036   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:26.211047   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:26.215478   30631 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:20:26.215513   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:26.215520   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:26.215523   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:26.215528   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:26.215533   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:26.215537   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:26 GMT
	I0229 18:20:26.215542   30631 round_trippers.go:580]     Audit-Id: 7eb51ac0-7fe0-4f1a-b706-7f79d019f1f8
	I0229 18:20:26.216520   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"ccce9f48-c73f-4045-b0aa-ccc8f0ee366c","resourceVersion":"1114","creationTimestamp":"2024-02-29T18:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_20_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 18:20:26.410979   30631 request.go:629] Waited for 69.213379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:20:26.411043   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:20:26.411049   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:26.411056   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:26.411061   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:26.413681   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:26.413701   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:26.413710   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:26.413717   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:26.413720   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:26.413726   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:26.413730   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:26 GMT
	I0229 18:20:26.413733   30631 round_trippers.go:580]     Audit-Id: b2223c29-9bfd-48c4-92f2-0ae9625f430c
	I0229 18:20:26.414007   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"1118","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0229 18:20:26.610850   30631 request.go:629] Waited for 196.344391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:26.610921   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:26.610928   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:26.610939   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:26.610956   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:26.614004   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:26.614026   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:26.614032   30631 round_trippers.go:580]     Audit-Id: 37e57260-cd8d-4d95-8c0e-23945c736c60
	I0229 18:20:26.614036   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:26.614039   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:26.614042   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:26.614045   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:26.614049   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:26 GMT
	I0229 18:20:26.614372   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"ccce9f48-c73f-4045-b0aa-ccc8f0ee366c","resourceVersion":"1114","creationTimestamp":"2024-02-29T18:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_20_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 18:20:26.841736   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:20:26.841759   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:26.841767   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:26.841772   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:26.844730   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:26.844755   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:26.844763   30631 round_trippers.go:580]     Audit-Id: 0567c0a6-d4d0-404d-bc4b-2c5adadc0379
	I0229 18:20:26.844768   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:26.844772   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:26.844775   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:26.844777   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:26.844781   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:26 GMT
	I0229 18:20:26.844987   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"1132","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5697 chars]
	I0229 18:20:27.010773   30631 request.go:629] Waited for 165.352345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:27.010839   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:20:27.010847   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:27.010857   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:27.010865   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:27.013573   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:27.013595   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:27.013601   30631 round_trippers.go:580]     Audit-Id: 2e6d0931-eee0-4dd7-9390-19378a5e442f
	I0229 18:20:27.013605   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:27.013607   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:27.013609   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:27.013612   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:27.013615   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:26 GMT
	I0229 18:20:27.013852   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"ccce9f48-c73f-4045-b0aa-ccc8f0ee366c","resourceVersion":"1114","creationTimestamp":"2024-02-29T18:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_20_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 18:20:27.014111   30631 pod_ready.go:92] pod "kube-proxy-cbl8s" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:27.014128   30631 pod_ready.go:81] duration metric: took 1.172538993s waiting for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:27.014136   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:27.210642   30631 request.go:629] Waited for 196.419218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:20:27.210691   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:20:27.210701   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:27.210710   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:27.210727   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:27.214299   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:27.214320   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:27.214327   30631 round_trippers.go:580]     Audit-Id: 9b54b7b8-3f41-4efc-8b74-2bbddd8d09db
	I0229 18:20:27.214332   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:27.214335   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:27.214337   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:27.214340   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:27.214342   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:27 GMT
	I0229 18:20:27.214564   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfw9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f","resourceVersion":"780","creationTimestamp":"2024-02-29T18:09:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5488 chars]
	I0229 18:20:27.411008   30631 request.go:629] Waited for 195.90265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:20:27.411082   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:20:27.411088   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:27.411096   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:27.411099   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:27.413963   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:27.413980   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:27.413986   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:27 GMT
	I0229 18:20:27.413991   30631 round_trippers.go:580]     Audit-Id: 70d9831b-f8ed-4ae3-9447-fa9721a8b953
	I0229 18:20:27.413995   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:27.414001   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:27.414007   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:27.414012   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:27.414314   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"2aa133ce-8b37-4464-acdc-adffba00e813","resourceVersion":"1115","creationTimestamp":"2024-02-29T18:10:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_20_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:10:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3965 chars]
	I0229 18:20:27.414600   30631 pod_ready.go:92] pod "kube-proxy-jfw9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:27.414616   30631 pod_ready.go:81] duration metric: took 400.474718ms waiting for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:27.414624   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:27.610852   30631 request.go:629] Waited for 196.158846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:20:27.610903   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:20:27.610908   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:27.610915   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:27.610919   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:27.613693   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:27.613722   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:27.613731   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:27 GMT
	I0229 18:20:27.613740   30631 round_trippers.go:580]     Audit-Id: 5ba1b2bc-fc3b-4f29-b03a-c7fb861fe4bb
	I0229 18:20:27.613744   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:27.613748   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:27.613752   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:27.613755   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:27.614230   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wvhlx","generateName":"kube-proxy-","namespace":"kube-system","uid":"5548dfdd-2cda-48bc-9359-95eda53437d4","resourceVersion":"814","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 18:20:27.810968   30631 request.go:629] Waited for 196.383664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:27.811050   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:27.811074   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:27.811087   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:27.811093   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:27.813717   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:20:27.813739   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:27.813748   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:27.813754   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:27 GMT
	I0229 18:20:27.813759   30631 round_trippers.go:580]     Audit-Id: 20ddc310-669f-4a36-8b37-5a91f13986cf
	I0229 18:20:27.813763   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:27.813767   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:27.813771   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:27.813928   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:27.814235   30631 pod_ready.go:92] pod "kube-proxy-wvhlx" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:27.814251   30631 pod_ready.go:81] duration metric: took 399.621263ms waiting for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:27.814264   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:28.010303   30631 request.go:629] Waited for 195.979723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:20:28.010380   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:20:28.010387   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:28.010395   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:28.010399   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:28.013413   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:28.013439   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:28.013450   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:28.013455   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:28.013460   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:27 GMT
	I0229 18:20:28.013464   30631 round_trippers.go:580]     Audit-Id: d5e8a900-6a3d-4a15-b956-3d5a0341cabc
	I0229 18:20:28.013467   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:28.013474   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:28.014142   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-051105","namespace":"kube-system","uid":"de579522-4a2a-4a66-86f0-8fd37603bb85","resourceVersion":"949","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.mirror":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.seen":"2024-02-29T18:06:55.285517129Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 18:20:28.210820   30631 request.go:629] Waited for 196.339913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:28.210881   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:20:28.210889   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:28.210900   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:28.210906   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:28.214274   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:28.214298   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:28.214308   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:28.214315   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:28 GMT
	I0229 18:20:28.214320   30631 round_trippers.go:580]     Audit-Id: 3572763a-6268-4bd1-8ac0-e4f29452c70e
	I0229 18:20:28.214325   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:28.214329   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:28.214333   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:28.214628   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:20:28.214973   30631 pod_ready.go:92] pod "kube-scheduler-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:20:28.214989   30631 pod_ready.go:81] duration metric: took 400.717609ms waiting for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:20:28.215000   30631 pod_ready.go:38] duration metric: took 2.401368472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:20:28.215038   30631 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:20:28.215084   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:20:28.231096   30631 system_svc.go:56] duration metric: took 16.069934ms WaitForService to wait for kubelet.
	I0229 18:20:28.231128   30631 kubeadm.go:581] duration metric: took 2.442254304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:20:28.231151   30631 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:20:28.411170   30631 request.go:629] Waited for 179.954877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0229 18:20:28.411263   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0229 18:20:28.411276   30631 round_trippers.go:469] Request Headers:
	I0229 18:20:28.411288   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:20:28.411295   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:20:28.414634   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:20:28.414657   30631 round_trippers.go:577] Response Headers:
	I0229 18:20:28.414664   30631 round_trippers.go:580]     Audit-Id: 3b4c4930-1de8-49f8-aae5-233ace767c5d
	I0229 18:20:28.414668   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:20:28.414672   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:20:28.414674   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:20:28.414678   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:20:28.414681   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:20:28 GMT
	I0229 18:20:28.415524   30631 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1136"},"items":[{"metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16493 chars]
	I0229 18:20:28.416098   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:20:28.416115   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:20:28.416128   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:20:28.416133   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:20:28.416138   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:20:28.416145   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:20:28.416153   30631 node_conditions.go:105] duration metric: took 184.995947ms to run NodePressure ...
	I0229 18:20:28.416168   30631 start.go:228] waiting for startup goroutines ...
	I0229 18:20:28.416199   30631 start.go:242] writing updated cluster config ...
	I0229 18:20:28.416610   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:20:28.416716   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:20:28.419193   30631 out.go:177] * Starting worker node multinode-051105-m03 in cluster multinode-051105
	I0229 18:20:28.420886   30631 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:20:28.420914   30631 cache.go:56] Caching tarball of preloaded images
	I0229 18:20:28.421030   30631 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:20:28.421046   30631 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:20:28.421164   30631 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/config.json ...
	I0229 18:20:28.421392   30631 start.go:365] acquiring machines lock for multinode-051105-m03: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:20:28.421436   30631 start.go:369] acquired machines lock for "multinode-051105-m03" in 24.984µs
	I0229 18:20:28.421451   30631 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:20:28.421456   30631 fix.go:54] fixHost starting: m03
	I0229 18:20:28.421704   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:20:28.421733   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:20:28.435940   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I0229 18:20:28.436385   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:20:28.436857   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:20:28.436876   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:20:28.437165   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:20:28.437348   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:20:28.437491   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetState
	I0229 18:20:28.439005   30631 fix.go:102] recreateIfNeeded on multinode-051105-m03: state=Running err=<nil>
	W0229 18:20:28.439035   30631 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:20:28.441035   30631 out.go:177] * Updating the running kvm2 "multinode-051105-m03" VM ...
	I0229 18:20:28.442278   30631 machine.go:88] provisioning docker machine ...
	I0229 18:20:28.442297   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:20:28.442502   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetMachineName
	I0229 18:20:28.442663   30631 buildroot.go:166] provisioning hostname "multinode-051105-m03"
	I0229 18:20:28.442680   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetMachineName
	I0229 18:20:28.442811   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:20:28.445238   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.445629   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:28.445655   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.445795   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:20:28.445949   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:28.446110   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:28.446228   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:20:28.446398   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:20:28.446547   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0229 18:20:28.446559   30631 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051105-m03 && echo "multinode-051105-m03" | sudo tee /etc/hostname
	I0229 18:20:28.571787   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051105-m03
	
	I0229 18:20:28.571816   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:20:28.574487   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.574825   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:28.574852   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.574991   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:20:28.575216   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:28.575390   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:28.575596   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:20:28.575772   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:20:28.575944   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0229 18:20:28.575966   30631 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-051105-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-051105-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-051105-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:20:28.680615   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:20:28.680641   30631 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:20:28.680660   30631 buildroot.go:174] setting up certificates
	I0229 18:20:28.680670   30631 provision.go:83] configureAuth start
	I0229 18:20:28.680680   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetMachineName
	I0229 18:20:28.680957   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetIP
	I0229 18:20:28.683615   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.684005   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:28.684032   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.684198   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:20:28.686537   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.686828   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:28.686846   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.687000   30631 provision.go:138] copyHostCerts
	I0229 18:20:28.687045   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:20:28.687085   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:20:28.687095   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:20:28.687180   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:20:28.687265   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:20:28.687290   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:20:28.687296   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:20:28.687333   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:20:28.687397   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:20:28.687420   30631 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:20:28.687437   30631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:20:28.687470   30631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:20:28.687531   30631 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.multinode-051105-m03 san=[192.168.39.78 192.168.39.78 localhost 127.0.0.1 minikube multinode-051105-m03]
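	The line above records minikube's provisioner generating the machine's TLS server certificate in Go, signed by the shared CA and valid for the SANs listed. As a purely illustrative sketch (not minikube's actual code path), an equivalent certificate could be produced with openssl, reusing the CA material, org and SAN list from that log line:

	# illustrative only: an openssl equivalent of the logged server cert generation
	# ca.pem / ca-key.pem, the org and the SAN list are taken from the log line above
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.multinode-051105-m03"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.39.78,DNS:localhost,DNS:minikube,DNS:multinode-051105-m03,IP:127.0.0.1")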
	I0229 18:20:28.963948   30631 provision.go:172] copyRemoteCerts
	I0229 18:20:28.964003   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:20:28.964030   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:20:28.966861   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.967332   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:28.967351   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:28.967572   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:20:28.967744   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:28.967865   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:20:28.967973   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m03/id_rsa Username:docker}
	I0229 18:20:29.050530   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 18:20:29.050611   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 18:20:29.079440   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 18:20:29.079503   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:20:29.106883   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 18:20:29.106967   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:20:29.134355   30631 provision.go:86] duration metric: configureAuth took 453.675094ms
	I0229 18:20:29.134378   30631 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:20:29.134556   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:20:29.134616   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:20:29.137297   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:29.137608   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:20:29.137629   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:20:29.137850   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:20:29.138082   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:29.138287   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:20:29.138426   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:20:29.138586   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:20:29.138774   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0229 18:20:29.138792   30631 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:21:59.546012   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:21:59.546066   30631 machine.go:91] provisioned docker machine in 1m31.103775014s
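	The %!s(MISSING) in the provisioning command above is an artifact of Go's fmt package when a format verb in the logged string has no matching argument; the SSH output echoed back afterwards shows what actually reached the node. Reconstructed from those lines (a sketch, assuming the literal command used printf %s), the step amounts to:

	sudo mkdir -p /etc/sysconfig
	printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio

	Note that this single SSH step spans 18:20:29 to 18:21:59 in the timestamps above; the systemctl restart crio on its tail end accounts for nearly all of the 1m31s provisioning time reported for this node.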
	I0229 18:21:59.546083   30631 start.go:300] post-start starting for "multinode-051105-m03" (driver="kvm2")
	I0229 18:21:59.546101   30631 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:21:59.546129   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:21:59.546470   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:21:59.546499   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:21:59.549474   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.549839   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:21:59.549867   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.550026   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:21:59.550243   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:21:59.550420   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:21:59.550562   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m03/id_rsa Username:docker}
	I0229 18:21:59.636211   30631 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:21:59.641662   30631 command_runner.go:130] > NAME=Buildroot
	I0229 18:21:59.641682   30631 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:21:59.641686   30631 command_runner.go:130] > ID=buildroot
	I0229 18:21:59.641693   30631 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:21:59.641700   30631 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:21:59.641758   30631 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:21:59.641775   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:21:59.641864   30631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:21:59.641941   30631 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:21:59.641950   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /etc/ssl/certs/136512.pem
	I0229 18:21:59.642031   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:21:59.654028   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:21:59.683757   30631 start.go:303] post-start completed in 137.651178ms
	I0229 18:21:59.683782   30631 fix.go:56] fixHost completed within 1m31.262326402s
	I0229 18:21:59.683803   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:21:59.686470   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.686815   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:21:59.686847   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.686999   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:21:59.687215   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:21:59.687373   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:21:59.687518   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:21:59.687692   30631 main.go:141] libmachine: Using SSH client type: native
	I0229 18:21:59.687873   30631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0229 18:21:59.687884   30631 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:21:59.792326   30631 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709230919.783819962
	
	I0229 18:21:59.792348   30631 fix.go:206] guest clock: 1709230919.783819962
	I0229 18:21:59.792357   30631 fix.go:219] Guest: 2024-02-29 18:21:59.783819962 +0000 UTC Remote: 2024-02-29 18:21:59.683786341 +0000 UTC m=+551.605039712 (delta=100.033621ms)
	I0229 18:21:59.792377   30631 fix.go:190] guest clock delta is within tolerance: 100.033621ms
	I0229 18:21:59.792384   30631 start.go:83] releasing machines lock for "multinode-051105-m03", held for 1m31.370936863s
	I0229 18:21:59.792406   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:21:59.792654   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetIP
	I0229 18:21:59.795247   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.795688   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:21:59.795715   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.797671   30631 out.go:177] * Found network options:
	I0229 18:21:59.799308   30631 out.go:177]   - NO_PROXY=192.168.39.200,192.168.39.104
	W0229 18:21:59.800747   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 18:21:59.800767   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:21:59.800781   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:21:59.801315   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:21:59.801512   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .DriverName
	I0229 18:21:59.801602   30631 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:21:59.801637   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	W0229 18:21:59.801719   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 18:21:59.801742   30631 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:21:59.801815   30631 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:21:59.801836   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHHostname
	I0229 18:21:59.804189   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.804313   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.804582   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:21:59.804608   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.804635   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:21:59.804651   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:21:59.804749   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:21:59.804840   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHPort
	I0229 18:21:59.804903   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:21:59.805022   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHKeyPath
	I0229 18:21:59.805076   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:21:59.805159   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetSSHUsername
	I0229 18:21:59.805337   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m03/id_rsa Username:docker}
	I0229 18:21:59.805338   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m03/id_rsa Username:docker}
	I0229 18:22:00.044573   30631 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:22:00.044595   30631 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:22:00.051599   30631 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 18:22:00.051722   30631 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:22:00.051776   30631 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:22:00.062841   30631 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:22:00.062868   30631 start.go:475] detecting cgroup driver to use...
	I0229 18:22:00.062936   30631 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:22:00.083993   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:22:00.100622   30631 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:22:00.100682   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:22:00.121139   30631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:22:00.137294   30631 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:22:00.277612   30631 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:22:00.407708   30631 docker.go:233] disabling docker service ...
	I0229 18:22:00.407789   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:22:00.425575   30631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:22:00.440297   30631 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:22:00.570690   30631 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:22:00.700410   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:22:00.716865   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:22:00.738971   30631 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
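	The step above writes the crictl client configuration so that crictl talks to CRI-O over its unix socket rather than needing a flag on every call. Assuming the file content echoed back above, this is how the result is typically checked and used (the same crictl version call appears further down in this log):

	# inspect the file written above and query the runtime through it
	cat /etc/crictl.yaml        # -> runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version         # uses that endpoint; no --runtime-endpoint flag needed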
	I0229 18:22:00.739666   30631 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:22:00.739729   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:22:00.752080   30631 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:22:00.752133   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:22:00.764162   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:22:00.775630   30631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
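	The sed invocations above edit CRI-O's drop-in configuration in place: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, drop any existing conmon_cgroup line and re-add it as "pod" directly after cgroup_manager. A quick way to confirm the result on the node (a sketch; the cgroup values match the crio config dump later in this log, and the pause image is the one set by the first sed above):

	# verify the values the sed edits above put in place
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"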
	I0229 18:22:00.787201   30631 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:22:00.799070   30631 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:22:00.808723   30631 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 18:22:00.808875   30631 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:22:00.819418   30631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:22:00.940975   30631 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:22:07.190625   30631 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.249612328s)
	I0229 18:22:07.190680   30631 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:22:07.190736   30631 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:22:07.196742   30631 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 18:22:07.196764   30631 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:22:07.196772   30631 command_runner.go:130] > Device: 0,22	Inode: 1151        Links: 1
	I0229 18:22:07.196782   30631 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:22:07.196788   30631 command_runner.go:130] > Access: 2024-02-29 18:22:07.126560686 +0000
	I0229 18:22:07.196796   30631 command_runner.go:130] > Modify: 2024-02-29 18:22:07.126560686 +0000
	I0229 18:22:07.196804   30631 command_runner.go:130] > Change: 2024-02-29 18:22:07.126560686 +0000
	I0229 18:22:07.196810   30631 command_runner.go:130] >  Birth: -
	I0229 18:22:07.196826   30631 start.go:543] Will wait 60s for crictl version
	I0229 18:22:07.196876   30631 ssh_runner.go:195] Run: which crictl
	I0229 18:22:07.201384   30631 command_runner.go:130] > /usr/bin/crictl
	I0229 18:22:07.201453   30631 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:22:07.252621   30631 command_runner.go:130] > Version:  0.1.0
	I0229 18:22:07.252648   30631 command_runner.go:130] > RuntimeName:  cri-o
	I0229 18:22:07.252655   30631 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 18:22:07.252663   30631 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:22:07.252784   30631 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:22:07.252864   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:22:07.283037   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:22:07.283056   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:22:07.283062   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:22:07.283067   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:22:07.283070   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:22:07.283076   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:22:07.283080   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:22:07.283084   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:22:07.283088   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:22:07.283093   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:22:07.283097   30631 command_runner.go:130] > BuildTags:      
	I0229 18:22:07.283101   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:22:07.283105   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:22:07.283124   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:22:07.283143   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:22:07.283150   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:22:07.283158   30631 command_runner.go:130] >   seccomp
	I0229 18:22:07.283164   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:22:07.283174   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:22:07.283180   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:22:07.284498   30631 ssh_runner.go:195] Run: crio --version
	I0229 18:22:07.319426   30631 command_runner.go:130] > crio version 1.29.1
	I0229 18:22:07.319447   30631 command_runner.go:130] > Version:        1.29.1
	I0229 18:22:07.319456   30631 command_runner.go:130] > GitCommit:      unknown
	I0229 18:22:07.319463   30631 command_runner.go:130] > GitCommitDate:  unknown
	I0229 18:22:07.319470   30631 command_runner.go:130] > GitTreeState:   clean
	I0229 18:22:07.319477   30631 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 18:22:07.319482   30631 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 18:22:07.319486   30631 command_runner.go:130] > Compiler:       gc
	I0229 18:22:07.319490   30631 command_runner.go:130] > Platform:       linux/amd64
	I0229 18:22:07.319494   30631 command_runner.go:130] > Linkmode:       dynamic
	I0229 18:22:07.319501   30631 command_runner.go:130] > BuildTags:      
	I0229 18:22:07.319505   30631 command_runner.go:130] >   containers_image_ostree_stub
	I0229 18:22:07.319510   30631 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 18:22:07.319514   30631 command_runner.go:130] >   btrfs_noversion
	I0229 18:22:07.319519   30631 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 18:22:07.319523   30631 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 18:22:07.319533   30631 command_runner.go:130] >   seccomp
	I0229 18:22:07.319545   30631 command_runner.go:130] > LDFlags:          unknown
	I0229 18:22:07.319556   30631 command_runner.go:130] > SeccompEnabled:   true
	I0229 18:22:07.319563   30631 command_runner.go:130] > AppArmorEnabled:  false
	I0229 18:22:07.321861   30631 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:22:07.323488   30631 out.go:177]   - env NO_PROXY=192.168.39.200
	I0229 18:22:07.325026   30631 out.go:177]   - env NO_PROXY=192.168.39.200,192.168.39.104
	I0229 18:22:07.326324   30631 main.go:141] libmachine: (multinode-051105-m03) Calling .GetIP
	I0229 18:22:07.328891   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:22:07.329228   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:81:51", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:10:31 +0000 UTC Type:0 Mac:52:54:00:9a:81:51 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-051105-m03 Clientid:01:52:54:00:9a:81:51}
	I0229 18:22:07.329250   30631 main.go:141] libmachine: (multinode-051105-m03) DBG | domain multinode-051105-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:9a:81:51 in network mk-multinode-051105
	I0229 18:22:07.329515   30631 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:22:07.334799   30631 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0229 18:22:07.334842   30631 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105 for IP: 192.168.39.78
	I0229 18:22:07.334865   30631 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:22:07.335046   30631 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:22:07.335099   30631 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:22:07.335115   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:22:07.335132   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:22:07.335151   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:22:07.335168   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:22:07.335236   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:22:07.335274   30631 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:22:07.335288   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:22:07.335319   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:22:07.335351   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:22:07.335383   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:22:07.335442   30631 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:22:07.335484   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:22:07.335503   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem -> /usr/share/ca-certificates/13651.pem
	I0229 18:22:07.335520   30631 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> /usr/share/ca-certificates/136512.pem
	I0229 18:22:07.335844   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:22:07.367216   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:22:07.398885   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:22:07.426710   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:22:07.454721   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:22:07.486190   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:22:07.530598   30631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:22:07.560073   30631 ssh_runner.go:195] Run: openssl version
	I0229 18:22:07.566618   30631 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:22:07.566851   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:22:07.579490   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:22:07.584699   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:22:07.584734   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:22:07.584767   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:22:07.591042   30631 command_runner.go:130] > b5213941
	I0229 18:22:07.591317   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:22:07.602150   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:22:07.615385   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:22:07.620415   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:22:07.620517   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:22:07.620568   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:22:07.626664   30631 command_runner.go:130] > 51391683
	I0229 18:22:07.626886   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:22:07.637543   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:22:07.649215   30631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:22:07.654158   30631 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:22:07.654183   30631 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:22:07.654213   30631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:22:07.660901   30631 command_runner.go:130] > 3ec20f2e
	I0229 18:22:07.660966   30631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
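	The repeated test/ln pattern above is how the copied CA certificates are registered with OpenSSL's hashed lookup directory: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its own name and again under its subject hash, so TLS clients on the node can find it. Collapsed into one loop (a sketch of the same steps, not the literal commands minikube issues):

	# same idea as the per-certificate sequences above, collapsed into a loop
	for pem in /usr/share/ca-certificates/*.pem; do
	  name=$(basename "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/$name"                      # e.g. minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")                   # e.g. b5213941
	  sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/${hash}.0"  # hashed lookup name
	done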
	I0229 18:22:07.671563   30631 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:22:07.676258   30631 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:22:07.676484   30631 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:22:07.676565   30631 ssh_runner.go:195] Run: crio config
	I0229 18:22:07.721542   30631 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 18:22:07.721565   30631 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 18:22:07.721574   30631 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 18:22:07.721578   30631 command_runner.go:130] > #
	I0229 18:22:07.721587   30631 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 18:22:07.721597   30631 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 18:22:07.721608   30631 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 18:22:07.721622   30631 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 18:22:07.721627   30631 command_runner.go:130] > # reload'.
	I0229 18:22:07.721637   30631 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 18:22:07.721652   30631 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 18:22:07.721665   30631 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 18:22:07.721679   30631 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 18:22:07.721685   30631 command_runner.go:130] > [crio]
	I0229 18:22:07.721695   30631 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 18:22:07.721707   30631 command_runner.go:130] > # containers images, in this directory.
	I0229 18:22:07.721717   30631 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 18:22:07.721733   30631 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 18:22:07.721744   30631 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 18:22:07.721752   30631 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 18:22:07.721760   30631 command_runner.go:130] > # imagestore = ""
	I0229 18:22:07.721773   30631 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 18:22:07.721786   30631 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 18:22:07.721797   30631 command_runner.go:130] > storage_driver = "overlay"
	I0229 18:22:07.721808   30631 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 18:22:07.721822   30631 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 18:22:07.721831   30631 command_runner.go:130] > storage_option = [
	I0229 18:22:07.721842   30631 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 18:22:07.721846   30631 command_runner.go:130] > ]
	I0229 18:22:07.721860   30631 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 18:22:07.721877   30631 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 18:22:07.721886   30631 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 18:22:07.721898   30631 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 18:22:07.721911   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 18:22:07.721921   30631 command_runner.go:130] > # always happen on a node reboot
	I0229 18:22:07.721929   30631 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 18:22:07.721943   30631 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 18:22:07.721957   30631 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 18:22:07.721968   30631 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 18:22:07.721979   30631 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 18:22:07.721993   30631 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 18:22:07.722009   30631 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 18:22:07.722017   30631 command_runner.go:130] > # internal_wipe = true
	I0229 18:22:07.722032   30631 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 18:22:07.722044   30631 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 18:22:07.722054   30631 command_runner.go:130] > # internal_repair = false
	I0229 18:22:07.722066   30631 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 18:22:07.722080   30631 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 18:22:07.722091   30631 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 18:22:07.722101   30631 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 18:22:07.722107   30631 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 18:22:07.722115   30631 command_runner.go:130] > [crio.api]
	I0229 18:22:07.722126   30631 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 18:22:07.722139   30631 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 18:22:07.722150   30631 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 18:22:07.722160   30631 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 18:22:07.722170   30631 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 18:22:07.722181   30631 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 18:22:07.722188   30631 command_runner.go:130] > # stream_port = "0"
	I0229 18:22:07.722193   30631 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 18:22:07.722202   30631 command_runner.go:130] > # stream_enable_tls = false
	I0229 18:22:07.722212   30631 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 18:22:07.722224   30631 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 18:22:07.722237   30631 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 18:22:07.722249   30631 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 18:22:07.722256   30631 command_runner.go:130] > # minutes.
	I0229 18:22:07.722263   30631 command_runner.go:130] > # stream_tls_cert = ""
	I0229 18:22:07.722273   30631 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 18:22:07.722286   30631 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 18:22:07.722297   30631 command_runner.go:130] > # stream_tls_key = ""
	I0229 18:22:07.722309   30631 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 18:22:07.722322   30631 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 18:22:07.722340   30631 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 18:22:07.722350   30631 command_runner.go:130] > # stream_tls_ca = ""
	I0229 18:22:07.722360   30631 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:22:07.722371   30631 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 18:22:07.722389   30631 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 18:22:07.722401   30631 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0229 18:22:07.722411   30631 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 18:22:07.722423   30631 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 18:22:07.722432   30631 command_runner.go:130] > [crio.runtime]
	I0229 18:22:07.722441   30631 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 18:22:07.722453   30631 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 18:22:07.722464   30631 command_runner.go:130] > # "nofile=1024:2048"
	I0229 18:22:07.722476   30631 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 18:22:07.722482   30631 command_runner.go:130] > # default_ulimits = [
	I0229 18:22:07.722488   30631 command_runner.go:130] > # ]
	I0229 18:22:07.722493   30631 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 18:22:07.722499   30631 command_runner.go:130] > # no_pivot = false
	I0229 18:22:07.722507   30631 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 18:22:07.722516   30631 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 18:22:07.722520   30631 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 18:22:07.722528   30631 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 18:22:07.722533   30631 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 18:22:07.722542   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:22:07.722546   30631 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 18:22:07.722551   30631 command_runner.go:130] > # Cgroup setting for conmon
	I0229 18:22:07.722558   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 18:22:07.722564   30631 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 18:22:07.722570   30631 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 18:22:07.722577   30631 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 18:22:07.722583   30631 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 18:22:07.722589   30631 command_runner.go:130] > conmon_env = [
	I0229 18:22:07.722595   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:22:07.722601   30631 command_runner.go:130] > ]
	I0229 18:22:07.722609   30631 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 18:22:07.722620   30631 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 18:22:07.722630   30631 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 18:22:07.722638   30631 command_runner.go:130] > # default_env = [
	I0229 18:22:07.722643   30631 command_runner.go:130] > # ]
	I0229 18:22:07.722656   30631 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 18:22:07.722672   30631 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0229 18:22:07.722681   30631 command_runner.go:130] > # selinux = false
	I0229 18:22:07.722693   30631 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 18:22:07.722706   30631 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 18:22:07.722719   30631 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 18:22:07.722730   30631 command_runner.go:130] > # seccomp_profile = ""
	I0229 18:22:07.722741   30631 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 18:22:07.722754   30631 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 18:22:07.722765   30631 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 18:22:07.722788   30631 command_runner.go:130] > # which might increase security.
	I0229 18:22:07.722799   30631 command_runner.go:130] > # This option is currently deprecated,
	I0229 18:22:07.722809   30631 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 18:22:07.722820   30631 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 18:22:07.722833   30631 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 18:22:07.722845   30631 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 18:22:07.722858   30631 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 18:22:07.722870   30631 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 18:22:07.722882   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:22:07.722892   30631 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 18:22:07.722905   30631 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 18:22:07.722915   30631 command_runner.go:130] > # the cgroup blockio controller.
	I0229 18:22:07.722924   30631 command_runner.go:130] > # blockio_config_file = ""
	I0229 18:22:07.722938   30631 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 18:22:07.722946   30631 command_runner.go:130] > # blockio parameters.
	I0229 18:22:07.722952   30631 command_runner.go:130] > # blockio_reload = false
	I0229 18:22:07.722965   30631 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 18:22:07.722976   30631 command_runner.go:130] > # irqbalance daemon.
	I0229 18:22:07.722984   30631 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 18:22:07.722998   30631 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0229 18:22:07.723013   30631 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 18:22:07.723037   30631 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 18:22:07.723050   30631 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 18:22:07.723060   30631 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 18:22:07.723072   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:22:07.723083   30631 command_runner.go:130] > # rdt_config_file = ""
	I0229 18:22:07.723093   30631 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 18:22:07.723103   30631 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 18:22:07.723123   30631 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 18:22:07.723132   30631 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 18:22:07.723143   30631 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 18:22:07.723155   30631 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 18:22:07.723164   30631 command_runner.go:130] > # will be added.
	I0229 18:22:07.723172   30631 command_runner.go:130] > # default_capabilities = [
	I0229 18:22:07.723181   30631 command_runner.go:130] > # 	"CHOWN",
	I0229 18:22:07.723189   30631 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 18:22:07.723198   30631 command_runner.go:130] > # 	"FSETID",
	I0229 18:22:07.723203   30631 command_runner.go:130] > # 	"FOWNER",
	I0229 18:22:07.723210   30631 command_runner.go:130] > # 	"SETGID",
	I0229 18:22:07.723217   30631 command_runner.go:130] > # 	"SETUID",
	I0229 18:22:07.723227   30631 command_runner.go:130] > # 	"SETPCAP",
	I0229 18:22:07.723234   30631 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 18:22:07.723245   30631 command_runner.go:130] > # 	"KILL",
	I0229 18:22:07.723250   30631 command_runner.go:130] > # ]
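A minimal sketch (not taken from this log) of how the default_capabilities list above would be uncommented to pin an explicit capability set; the particular selection shown is only an example:

	# illustrative crio.conf fragment; capabilities omitted from the list are dropped from the default set
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FOWNER",
		"NET_BIND_SERVICE",
	]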
	I0229 18:22:07.723265   30631 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 18:22:07.723278   30631 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 18:22:07.723288   30631 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 18:22:07.723298   30631 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 18:22:07.723311   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:22:07.723318   30631 command_runner.go:130] > # default_sysctls = [
	I0229 18:22:07.723323   30631 command_runner.go:130] > # ]
	I0229 18:22:07.723335   30631 command_runner.go:130] > # List of devices on the host that a
	I0229 18:22:07.723348   30631 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 18:22:07.723359   30631 command_runner.go:130] > # allowed_devices = [
	I0229 18:22:07.723368   30631 command_runner.go:130] > # 	"/dev/fuse",
	I0229 18:22:07.723379   30631 command_runner.go:130] > # ]
	I0229 18:22:07.723391   30631 command_runner.go:130] > # List of additional devices, specified as
	I0229 18:22:07.723405   30631 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 18:22:07.723415   30631 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 18:22:07.723425   30631 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 18:22:07.723435   30631 command_runner.go:130] > # additional_devices = [
	I0229 18:22:07.723441   30631 command_runner.go:130] > # ]
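As a sketch only, an additional_devices entry in the "device-on-host:device-on-container:permissions" form described above, reusing the placeholder paths from the comment itself:

	# illustrative crio.conf fragment; /dev/sdc and /dev/xvdc are placeholder device paths
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]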
	I0229 18:22:07.723449   30631 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 18:22:07.723460   30631 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 18:22:07.723468   30631 command_runner.go:130] > # 	"/etc/cdi",
	I0229 18:22:07.723472   30631 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 18:22:07.723478   30631 command_runner.go:130] > # ]
	I0229 18:22:07.723484   30631 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 18:22:07.723493   30631 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 18:22:07.723499   30631 command_runner.go:130] > # Defaults to false.
	I0229 18:22:07.723506   30631 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 18:22:07.723515   30631 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 18:22:07.723521   30631 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 18:22:07.723527   30631 command_runner.go:130] > # hooks_dir = [
	I0229 18:22:07.723531   30631 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 18:22:07.723535   30631 command_runner.go:130] > # ]
	I0229 18:22:07.723541   30631 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 18:22:07.723553   30631 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 18:22:07.723600   30631 command_runner.go:130] > # its default mounts from the following two files:
	I0229 18:22:07.723613   30631 command_runner.go:130] > #
	I0229 18:22:07.723624   30631 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 18:22:07.723634   30631 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 18:22:07.723647   30631 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 18:22:07.723655   30631 command_runner.go:130] > #
	I0229 18:22:07.723667   30631 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 18:22:07.723681   30631 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 18:22:07.723694   30631 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 18:22:07.723705   30631 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 18:22:07.723712   30631 command_runner.go:130] > #
	I0229 18:22:07.723719   30631 command_runner.go:130] > # default_mounts_file = ""
	I0229 18:22:07.723728   30631 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 18:22:07.723742   30631 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 18:22:07.723751   30631 command_runner.go:130] > pids_limit = 1024
	I0229 18:22:07.723761   30631 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0229 18:22:07.723774   30631 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 18:22:07.723787   30631 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 18:22:07.723803   30631 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 18:22:07.723812   30631 command_runner.go:130] > # log_size_max = -1
	I0229 18:22:07.723823   30631 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 18:22:07.723834   30631 command_runner.go:130] > # log_to_journald = false
	I0229 18:22:07.723841   30631 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 18:22:07.723847   30631 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 18:22:07.723855   30631 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 18:22:07.723864   30631 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 18:22:07.723876   30631 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 18:22:07.723884   30631 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 18:22:07.723896   30631 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 18:22:07.723909   30631 command_runner.go:130] > # read_only = false
	I0229 18:22:07.723922   30631 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 18:22:07.723937   30631 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 18:22:07.723944   30631 command_runner.go:130] > # live configuration reload.
	I0229 18:22:07.723952   30631 command_runner.go:130] > # log_level = "info"
	I0229 18:22:07.723963   30631 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 18:22:07.723974   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:22:07.723983   30631 command_runner.go:130] > # log_filter = ""
	I0229 18:22:07.723992   30631 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 18:22:07.724003   30631 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 18:22:07.724011   30631 command_runner.go:130] > # separated by comma.
	I0229 18:22:07.724024   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:22:07.724034   30631 command_runner.go:130] > # uid_mappings = ""
	I0229 18:22:07.724044   30631 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 18:22:07.724056   30631 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 18:22:07.724066   30631 command_runner.go:130] > # separated by comma.
	I0229 18:22:07.724078   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:22:07.724085   30631 command_runner.go:130] > # gid_mappings = ""
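Purely illustrative values for the mapping format described above (containerID:HostID:Size); the numbers are placeholders and, as the comments note, both options are deprecated:

	# hypothetical example; maps container IDs 0-65535 to host IDs starting at 100000
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"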
	I0229 18:22:07.724093   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 18:22:07.724106   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:22:07.724119   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:22:07.724131   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:22:07.724141   30631 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 18:22:07.724152   30631 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 18:22:07.724164   30631 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 18:22:07.724170   30631 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 18:22:07.724185   30631 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 18:22:07.724195   30631 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 18:22:07.724206   30631 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 18:22:07.724219   30631 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 18:22:07.724232   30631 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 18:22:07.724239   30631 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 18:22:07.724251   30631 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 18:22:07.724264   30631 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 18:22:07.724275   30631 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 18:22:07.724283   30631 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 18:22:07.724294   30631 command_runner.go:130] > drop_infra_ctr = false
	I0229 18:22:07.724306   30631 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 18:22:07.724316   30631 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 18:22:07.724329   30631 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 18:22:07.724338   30631 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 18:22:07.724350   30631 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 18:22:07.724362   30631 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 18:22:07.724380   30631 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 18:22:07.724389   30631 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 18:22:07.724398   30631 command_runner.go:130] > # shared_cpuset = ""
	I0229 18:22:07.724409   30631 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 18:22:07.724421   30631 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 18:22:07.724431   30631 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 18:22:07.724445   30631 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 18:22:07.724456   30631 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 18:22:07.724469   30631 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 18:22:07.724482   30631 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 18:22:07.724490   30631 command_runner.go:130] > # enable_criu_support = false
	I0229 18:22:07.724501   30631 command_runner.go:130] > # Enable/disable the generation of the container,
	I0229 18:22:07.724513   30631 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0229 18:22:07.724524   30631 command_runner.go:130] > # enable_pod_events = false
	I0229 18:22:07.724534   30631 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 18:22:07.724560   30631 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 18:22:07.724570   30631 command_runner.go:130] > # default_runtime = "runc"
	I0229 18:22:07.724579   30631 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 18:22:07.724593   30631 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where it would be created as a directory).
	I0229 18:22:07.724611   30631 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 18:22:07.724622   30631 command_runner.go:130] > # creation as a file is not desired either.
	I0229 18:22:07.724636   30631 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 18:22:07.724648   30631 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 18:22:07.724659   30631 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 18:22:07.724664   30631 command_runner.go:130] > # ]
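A sketch of the option above, using the /etc/hostname case that the comment itself gives as an example:

	# illustrative crio.conf fragment
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]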
	I0229 18:22:07.724676   30631 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 18:22:07.724689   30631 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 18:22:07.724702   30631 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 18:22:07.724715   30631 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 18:22:07.724723   30631 command_runner.go:130] > #
	I0229 18:22:07.724731   30631 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 18:22:07.724742   30631 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 18:22:07.724749   30631 command_runner.go:130] > # runtime_type = "oci"
	I0229 18:22:07.724779   30631 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 18:22:07.724790   30631 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 18:22:07.724798   30631 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 18:22:07.724809   30631 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 18:22:07.724817   30631 command_runner.go:130] > # monitor_env = []
	I0229 18:22:07.724828   30631 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 18:22:07.724837   30631 command_runner.go:130] > # allowed_annotations = []
	I0229 18:22:07.724846   30631 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 18:22:07.724853   30631 command_runner.go:130] > # Where:
	I0229 18:22:07.724859   30631 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 18:22:07.724872   30631 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 18:22:07.724886   30631 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 18:22:07.724899   30631 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 18:22:07.724909   30631 command_runner.go:130] > #   in $PATH.
	I0229 18:22:07.724919   30631 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 18:22:07.724930   30631 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 18:22:07.724941   30631 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 18:22:07.724945   30631 command_runner.go:130] > #   state.
	I0229 18:22:07.724952   30631 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 18:22:07.724965   30631 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0229 18:22:07.724979   30631 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 18:22:07.724990   30631 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 18:22:07.725001   30631 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 18:22:07.725015   30631 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 18:22:07.725025   30631 command_runner.go:130] > #   The currently recognized values are:
	I0229 18:22:07.725032   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 18:22:07.725045   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 18:22:07.725058   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 18:22:07.725071   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 18:22:07.725086   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 18:22:07.725100   30631 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 18:22:07.725114   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 18:22:07.725124   30631 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 18:22:07.725135   30631 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 18:22:07.725148   30631 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 18:22:07.725156   30631 command_runner.go:130] > #   deprecated option "conmon".
	I0229 18:22:07.725171   30631 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 18:22:07.725183   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 18:22:07.725196   30631 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 18:22:07.725204   30631 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 18:22:07.725212   30631 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0229 18:22:07.725223   30631 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 18:22:07.725237   30631 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 18:22:07.725248   30631 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 18:22:07.725253   30631 command_runner.go:130] > #
	I0229 18:22:07.725263   30631 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 18:22:07.725268   30631 command_runner.go:130] > #
	I0229 18:22:07.725280   30631 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 18:22:07.725290   30631 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 18:22:07.725293   30631 command_runner.go:130] > #
	I0229 18:22:07.725302   30631 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 18:22:07.725317   30631 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 18:22:07.725322   30631 command_runner.go:130] > #
	I0229 18:22:07.725335   30631 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 18:22:07.725344   30631 command_runner.go:130] > # feature.
	I0229 18:22:07.725349   30631 command_runner.go:130] > #
	I0229 18:22:07.725361   30631 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 18:22:07.725371   30631 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 18:22:07.725385   30631 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 18:22:07.725395   30631 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 18:22:07.725409   30631 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 18:22:07.725417   30631 command_runner.go:130] > #
	I0229 18:22:07.725426   30631 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 18:22:07.725438   30631 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 18:22:07.725446   30631 command_runner.go:130] > #
	I0229 18:22:07.725454   30631 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 18:22:07.725462   30631 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 18:22:07.725468   30631 command_runner.go:130] > #
	I0229 18:22:07.725480   30631 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 18:22:07.725494   30631 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 18:22:07.725502   30631 command_runner.go:130] > # limitation.
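A hedged sketch of a runtime-handler entry that allows the seccomp notifier annotation discussed above; the handler name "runc-notify" and the paths are placeholders, not values from this cluster's configuration:

	# illustrative entry following the [crio.runtime.runtimes.runtime-handler] format described above
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]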
	I0229 18:22:07.725509   30631 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 18:22:07.725519   30631 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 18:22:07.725529   30631 command_runner.go:130] > runtime_type = "oci"
	I0229 18:22:07.725535   30631 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 18:22:07.725544   30631 command_runner.go:130] > runtime_config_path = ""
	I0229 18:22:07.725549   30631 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 18:22:07.725555   30631 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 18:22:07.725561   30631 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 18:22:07.725571   30631 command_runner.go:130] > monitor_env = [
	I0229 18:22:07.725580   30631 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 18:22:07.725588   30631 command_runner.go:130] > ]
	I0229 18:22:07.725596   30631 command_runner.go:130] > privileged_without_host_devices = false
	I0229 18:22:07.725609   30631 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 18:22:07.725621   30631 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 18:22:07.725631   30631 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 18:22:07.725639   30631 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0229 18:22:07.725654   30631 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 18:22:07.725666   30631 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 18:22:07.725684   30631 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 18:22:07.725699   30631 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 18:22:07.725711   30631 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 18:22:07.725718   30631 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 18:22:07.725725   30631 command_runner.go:130] > # Example:
	I0229 18:22:07.725733   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 18:22:07.725744   30631 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 18:22:07.725756   30631 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 18:22:07.725766   30631 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 18:22:07.725772   30631 command_runner.go:130] > # cpuset = 0
	I0229 18:22:07.725782   30631 command_runner.go:130] > # cpushares = "0-1"
	I0229 18:22:07.725788   30631 command_runner.go:130] > # Where:
	I0229 18:22:07.725800   30631 command_runner.go:130] > # The workload name is workload-type.
	I0229 18:22:07.725811   30631 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 18:22:07.725821   30631 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 18:22:07.725834   30631 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 18:22:07.725847   30631 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 18:22:07.725860   30631 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
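An illustrative workloads entry in the format described above; the workload name, annotation strings, and resource values are placeholders, and the exact value types should be checked against the CRI-O version in use:

	# sketch of a crio.conf workloads table entry
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = "512"
	cpuset = "0-1"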
	I0229 18:22:07.725870   30631 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 18:22:07.725881   30631 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 18:22:07.725890   30631 command_runner.go:130] > # Default value is set to true
	I0229 18:22:07.725894   30631 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 18:22:07.725900   30631 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 18:22:07.725905   30631 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 18:22:07.725910   30631 command_runner.go:130] > # Default value is set to 'false'
	I0229 18:22:07.725931   30631 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 18:22:07.725945   30631 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 18:22:07.725952   30631 command_runner.go:130] > #
	I0229 18:22:07.725962   30631 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 18:22:07.725975   30631 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 18:22:07.725987   30631 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 18:22:07.725999   30631 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 18:22:07.726014   30631 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 18:22:07.726022   30631 command_runner.go:130] > [crio.image]
	I0229 18:22:07.726032   30631 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 18:22:07.726045   30631 command_runner.go:130] > # default_transport = "docker://"
	I0229 18:22:07.726056   30631 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 18:22:07.726066   30631 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:22:07.726071   30631 command_runner.go:130] > # global_auth_file = ""
	I0229 18:22:07.726079   30631 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 18:22:07.726086   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:22:07.726095   30631 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 18:22:07.726106   30631 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 18:22:07.726124   30631 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 18:22:07.726132   30631 command_runner.go:130] > # This option supports live configuration reload.
	I0229 18:22:07.726139   30631 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 18:22:07.726151   30631 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 18:22:07.726160   30631 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0229 18:22:07.726173   30631 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0229 18:22:07.726183   30631 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 18:22:07.726194   30631 command_runner.go:130] > # pause_command = "/pause"
	I0229 18:22:07.726207   30631 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 18:22:07.726219   30631 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 18:22:07.726231   30631 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 18:22:07.726239   30631 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 18:22:07.726245   30631 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 18:22:07.726253   30631 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 18:22:07.726257   30631 command_runner.go:130] > # pinned_images = [
	I0229 18:22:07.726262   30631 command_runner.go:130] > # ]
	I0229 18:22:07.726267   30631 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 18:22:07.726273   30631 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 18:22:07.726281   30631 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 18:22:07.726287   30631 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 18:22:07.726294   30631 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 18:22:07.726298   30631 command_runner.go:130] > # signature_policy = ""
	I0229 18:22:07.726306   30631 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 18:22:07.726312   30631 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 18:22:07.726320   30631 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 18:22:07.726325   30631 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0229 18:22:07.726331   30631 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 18:22:07.726338   30631 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 18:22:07.726343   30631 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 18:22:07.726349   30631 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 18:22:07.726355   30631 command_runner.go:130] > # changing them here.
	I0229 18:22:07.726359   30631 command_runner.go:130] > # insecure_registries = [
	I0229 18:22:07.726363   30631 command_runner.go:130] > # ]
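For illustration only: as the comments above recommend, registries are normally configured in /etc/containers/registries.conf rather than here; a minimal insecure-registry entry in that file looks roughly like the following (the registry name is a placeholder):

	# containers-registries.conf(5) sketch, not part of crio.conf
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true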
	I0229 18:22:07.726369   30631 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 18:22:07.726380   30631 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 18:22:07.726385   30631 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 18:22:07.726389   30631 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 18:22:07.726394   30631 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 18:22:07.726400   30631 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0229 18:22:07.726404   30631 command_runner.go:130] > # CNI plugins.
	I0229 18:22:07.726410   30631 command_runner.go:130] > [crio.network]
	I0229 18:22:07.726416   30631 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 18:22:07.726423   30631 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0229 18:22:07.726428   30631 command_runner.go:130] > # cni_default_network = ""
	I0229 18:22:07.726436   30631 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 18:22:07.726441   30631 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 18:22:07.726450   30631 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 18:22:07.726453   30631 command_runner.go:130] > # plugin_dirs = [
	I0229 18:22:07.726459   30631 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 18:22:07.726462   30631 command_runner.go:130] > # ]
	I0229 18:22:07.726467   30631 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 18:22:07.726473   30631 command_runner.go:130] > [crio.metrics]
	I0229 18:22:07.726477   30631 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 18:22:07.726481   30631 command_runner.go:130] > enable_metrics = true
	I0229 18:22:07.726486   30631 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 18:22:07.726493   30631 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 18:22:07.726498   30631 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0229 18:22:07.726506   30631 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 18:22:07.726514   30631 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 18:22:07.726520   30631 command_runner.go:130] > # metrics_collectors = [
	I0229 18:22:07.726524   30631 command_runner.go:130] > # 	"operations",
	I0229 18:22:07.726528   30631 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 18:22:07.726534   30631 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 18:22:07.726538   30631 command_runner.go:130] > # 	"operations_errors",
	I0229 18:22:07.726544   30631 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 18:22:07.726548   30631 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 18:22:07.726553   30631 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 18:22:07.726559   30631 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 18:22:07.726563   30631 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 18:22:07.726567   30631 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 18:22:07.726573   30631 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 18:22:07.726577   30631 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 18:22:07.726582   30631 command_runner.go:130] > # 	"containers_oom_total",
	I0229 18:22:07.726588   30631 command_runner.go:130] > # 	"containers_oom",
	I0229 18:22:07.726592   30631 command_runner.go:130] > # 	"processes_defunct",
	I0229 18:22:07.726598   30631 command_runner.go:130] > # 	"operations_total",
	I0229 18:22:07.726602   30631 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 18:22:07.726609   30631 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 18:22:07.726613   30631 command_runner.go:130] > # 	"operations_errors_total",
	I0229 18:22:07.726621   30631 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 18:22:07.726626   30631 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 18:22:07.726632   30631 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 18:22:07.726637   30631 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 18:22:07.726641   30631 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 18:22:07.726646   30631 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 18:22:07.726652   30631 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 18:22:07.726656   30631 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 18:22:07.726662   30631 command_runner.go:130] > # ]
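A sketch (not from the captured config) of enabling only a subset of the collectors listed above:

	# illustrative crio.conf fragment; collector names are taken from the list above
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]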
	I0229 18:22:07.726667   30631 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 18:22:07.726672   30631 command_runner.go:130] > # metrics_port = 9090
	I0229 18:22:07.726680   30631 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 18:22:07.726683   30631 command_runner.go:130] > # metrics_socket = ""
	I0229 18:22:07.726689   30631 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 18:22:07.726697   30631 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 18:22:07.726705   30631 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 18:22:07.726710   30631 command_runner.go:130] > # certificate on any modification event.
	I0229 18:22:07.726716   30631 command_runner.go:130] > # metrics_cert = ""
	I0229 18:22:07.726721   30631 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 18:22:07.726727   30631 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 18:22:07.726731   30631 command_runner.go:130] > # metrics_key = ""
	I0229 18:22:07.726737   30631 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 18:22:07.726741   30631 command_runner.go:130] > [crio.tracing]
	I0229 18:22:07.726746   30631 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 18:22:07.726752   30631 command_runner.go:130] > # enable_tracing = false
	I0229 18:22:07.726757   30631 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0229 18:22:07.726762   30631 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 18:22:07.726768   30631 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 18:22:07.726775   30631 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 18:22:07.726779   30631 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 18:22:07.726783   30631 command_runner.go:130] > [crio.nri]
	I0229 18:22:07.726787   30631 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 18:22:07.726793   30631 command_runner.go:130] > # enable_nri = false
	I0229 18:22:07.726798   30631 command_runner.go:130] > # NRI socket to listen on.
	I0229 18:22:07.726804   30631 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 18:22:07.726809   30631 command_runner.go:130] > # NRI plugin directory to use.
	I0229 18:22:07.726816   30631 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 18:22:07.726823   30631 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 18:22:07.726828   30631 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 18:22:07.726834   30631 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 18:22:07.726840   30631 command_runner.go:130] > # nri_disable_connections = false
	I0229 18:22:07.726845   30631 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 18:22:07.726852   30631 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 18:22:07.726857   30631 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 18:22:07.726863   30631 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 18:22:07.726869   30631 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 18:22:07.726875   30631 command_runner.go:130] > [crio.stats]
	I0229 18:22:07.726881   30631 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 18:22:07.726888   30631 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 18:22:07.726892   30631 command_runner.go:130] > # stats_collection_period = 0
	I0229 18:22:07.726933   30631 command_runner.go:130] ! time="2024-02-29 18:22:07.703973868Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 18:22:07.726952   30631 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0229 18:22:07.727016   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:22:07.727045   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:22:07.727056   30631 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:22:07.727078   30631 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-051105 NodeName:multinode-051105-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:22:07.727187   30631 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-051105-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:22:07.727230   30631 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-051105-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:22:07.727279   30631 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:22:07.738906   30631 command_runner.go:130] > kubeadm
	I0229 18:22:07.738940   30631 command_runner.go:130] > kubectl
	I0229 18:22:07.738944   30631 command_runner.go:130] > kubelet
	I0229 18:22:07.738970   30631 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:22:07.739013   30631 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 18:22:07.749423   30631 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0229 18:22:07.769809   30631 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:22:07.789039   30631 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0229 18:22:07.793720   30631 command_runner.go:130] > 192.168.39.200	control-plane.minikube.internal
	I0229 18:22:07.793777   30631 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:22:07.794053   30631 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:22:07.794096   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:22:07.794128   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:22:07.808918   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0229 18:22:07.809264   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:22:07.809700   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:22:07.809722   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:22:07.810038   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:22:07.810212   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:22:07.810355   30631 start.go:304] JoinCluster: &{Name:multinode-051105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-051105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:22:07.810514   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 18:22:07.810537   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:22:07.813318   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:22:07.813749   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:22:07.813774   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:22:07.813909   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:22:07.814065   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:22:07.814216   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:22:07.814317   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:22:07.986981   30631 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3e3ink.veu09ekean29ai3c --discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
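The join command printed above carries a bootstrap token plus the SHA-256 hash of the cluster CA certificate. As a sketch only (not something this test runs), the same hash can be recomputed on the control plane with the openssl pipeline documented for kubeadm, assuming the CA sits at /etc/kubernetes/pki/ca.crt:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'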
	I0229 18:22:07.987081   30631 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 18:22:07.987127   30631 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:22:07.987407   30631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:22:07.987450   30631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:22:08.002241   30631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0229 18:22:08.002626   30631 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:22:08.003039   30631 main.go:141] libmachine: Using API Version  1
	I0229 18:22:08.003056   30631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:22:08.003389   30631 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:22:08.003561   30631 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:22:08.003752   30631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-051105-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 18:22:08.003774   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:22:08.006639   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:22:08.007065   30631 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:17:58 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:22:08.007094   30631 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:22:08.007231   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:22:08.007417   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:22:08.007574   30631 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:22:08.007695   30631 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:22:08.197981   30631 command_runner.go:130] > node/multinode-051105-m03 cordoned
	I0229 18:22:11.238579   30631 command_runner.go:130] > pod "busybox-5b5d89c9d6-25wqb" has DeletionTimestamp older than 1 seconds, skipping
	I0229 18:22:11.238609   30631 command_runner.go:130] > node/multinode-051105-m03 drained
	I0229 18:22:11.240159   30631 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 18:22:11.240186   30631 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-kvkf2, kube-system/kube-proxy-jfw9f
	I0229 18:22:11.240214   30631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-051105-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.236433449s)
	I0229 18:22:11.240230   30631 node.go:108] successfully drained node "m03"
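The drain above is issued through minikube's bundled kubectl; a minimal manual equivalent (a sketch using the node name and flags visible in the log, minus the deprecated --delete-local-data) would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-051105-m03 \
      --ignore-daemonsets --delete-emptydir-data --force --grace-period=1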
	I0229 18:22:11.240619   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:22:11.240920   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:22:11.241229   30631 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 18:22:11.241279   30631 round_trippers.go:463] DELETE https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:11.241289   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:11.241299   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:11.241306   30631 round_trippers.go:473]     Content-Type: application/json
	I0229 18:22:11.241310   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:11.254811   30631 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 18:22:11.254836   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:11.254847   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:11.254857   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:11.254864   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:11.254869   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:11.254873   30631 round_trippers.go:580]     Content-Length: 171
	I0229 18:22:11.254879   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:11 GMT
	I0229 18:22:11.254886   30631 round_trippers.go:580]     Audit-Id: e636672f-0779-4b50-a8de-383e3c29c64c
	I0229 18:22:11.254917   30631 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-051105-m03","kind":"nodes","uid":"2aa133ce-8b37-4464-acdc-adffba00e813"}}
	I0229 18:22:11.254954   30631 node.go:124] successfully deleted node "m03"
	I0229 18:22:11.254966   30631 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
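The stale Node object is removed here with a raw DELETE against /api/v1/nodes/multinode-051105-m03; from a shell the same effect (a sketch, assuming a kubeconfig pointing at this cluster) is:

    kubectl delete node multinode-051105-m03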
	I0229 18:22:11.254995   30631 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 18:22:11.255019   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3e3ink.veu09ekean29ai3c --discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-051105-m03"
	I0229 18:22:11.312773   30631 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 18:22:11.482306   30631 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 18:22:11.482340   30631 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 18:22:11.542806   30631 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:22:11.543056   30631 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:22:11.543077   30631 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 18:22:11.686969   30631 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 18:22:12.211401   30631 command_runner.go:130] > This node has joined the cluster:
	I0229 18:22:12.211430   30631 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 18:22:12.211440   30631 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 18:22:12.211450   30631 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 18:22:12.213925   30631 command_runner.go:130] ! W0229 18:22:11.303989    2306 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0229 18:22:12.213951   30631 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0229 18:22:12.213960   30631 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0229 18:22:12.213974   30631 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
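The W0229 line above is kubeadm warning that the CRI endpoint was passed without a URL scheme and was auto-prefixed with unix://. A sketch of the join using the scheme kubeadm expects (token and hash stand for the values printed earlier in this log) would be:

    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name multinode-051105-m03 --ignore-preflight-errors=all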
	I0229 18:22:12.214003   30631 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 18:22:12.491332   30631 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-051105 minikube.k8s.io/updated_at=2024_02_29T18_22_12_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:22:12.585272   30631 command_runner.go:130] > node/multinode-051105-m02 labeled
	I0229 18:22:12.595548   30631 command_runner.go:130] > node/multinode-051105-m03 labeled
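Both worker nodes pick up the labels from the single call above because it targets the selector -l minikube.k8s.io/primary!=true rather than a node name; a stripped-down sketch of the same pattern is:

    kubectl label nodes -l minikube.k8s.io/primary!=true \
      minikube.k8s.io/name=multinode-051105 --overwrite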
	I0229 18:22:12.597180   30631 start.go:306] JoinCluster complete in 4.786821963s
	I0229 18:22:12.597211   30631 cni.go:84] Creating CNI manager for ""
	I0229 18:22:12.597219   30631 cni.go:136] 3 nodes found, recommending kindnet
	I0229 18:22:12.597319   30631 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:22:12.604145   30631 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 18:22:12.604170   30631 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 18:22:12.604178   30631 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 18:22:12.604187   30631 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:22:12.604195   30631 command_runner.go:130] > Access: 2024-02-29 18:17:58.604411532 +0000
	I0229 18:22:12.604207   30631 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 18:22:12.604221   30631 command_runner.go:130] > Change: 2024-02-29 18:17:57.283411532 +0000
	I0229 18:22:12.604231   30631 command_runner.go:130] >  Birth: -
	I0229 18:22:12.604268   30631 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:22:12.604280   30631 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 18:22:12.631186   30631 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:22:13.015700   30631 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:22:13.015730   30631 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:22:13.015744   30631 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 18:22:13.015751   30631 command_runner.go:130] > daemonset.apps/kindnet configured
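With the kindnet manifest re-applied, rollout of the DaemonSet onto the rejoined node can be checked like this (a sketch, not part of the test; the app=kindnet pod label is an assumption about the bundled manifest):

    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide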
	I0229 18:22:13.016089   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:22:13.016362   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:22:13.016637   30631 round_trippers.go:463] GET https://192.168.39.200:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:22:13.016650   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.016657   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.016663   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.018427   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.018446   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.018452   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.018454   30631 round_trippers.go:580]     Content-Length: 291
	I0229 18:22:13.018460   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:12 GMT
	I0229 18:22:13.018465   30631 round_trippers.go:580]     Audit-Id: b6eafa09-275a-428a-aa21-02fff704c9ef
	I0229 18:22:13.018467   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.018471   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.018478   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.018493   30631 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"980f57f9-4c9b-43a5-b35c-61bcb3268764","resourceVersion":"962","creationTimestamp":"2024-02-29T18:07:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 18:22:13.018561   30631 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-051105" context rescaled to 1 replicas
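The rescale above goes through the Scale subresource of the coredns Deployment; the equivalent kubectl one-liner (sketch) is:

    kubectl -n kube-system scale deployment coredns --replicas=1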
	I0229 18:22:13.018587   30631 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 18:22:13.020278   30631 out.go:177] * Verifying Kubernetes components...
	I0229 18:22:13.021375   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:22:13.038051   30631 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:22:13.038244   30631 kapi.go:59] client config for multinode-051105: &rest.Config{Host:"https://192.168.39.200:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/multinode-051105/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:22:13.038488   30631 node_ready.go:35] waiting up to 6m0s for node "multinode-051105-m03" to be "Ready" ...
	I0229 18:22:13.038563   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:13.038576   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.038586   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.038592   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.041505   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:13.041528   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.041537   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.041544   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.041548   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.041551   30631 round_trippers.go:580]     Audit-Id: 83353b37-3638-4297-aaf8-b6b2a33812cd
	I0229 18:22:13.041558   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.041562   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.041945   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"c8b0486c-a69d-4afd-ae48-91e2c6509d77","resourceVersion":"1295","creationTimestamp":"2024-02-29T18:22:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_22_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:22:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0229 18:22:13.042256   30631 node_ready.go:49] node "multinode-051105-m03" has status "Ready":"True"
	I0229 18:22:13.042272   30631 node_ready.go:38] duration metric: took 3.76762ms waiting for node "multinode-051105-m03" to be "Ready" ...
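Readiness is read directly from the Node object's status.conditions; an equivalent CLI wait (a sketch, using the 6m0s budget the test allows) would be:

    kubectl wait --for=condition=Ready node/multinode-051105-m03 --timeout=6m0s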
	I0229 18:22:13.042280   30631 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:22:13.042332   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0229 18:22:13.042344   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.042354   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.042362   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.048380   30631 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:22:13.048394   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.048400   30631 round_trippers.go:580]     Audit-Id: 8180c153-0338-4a32-9ab7-bf97703a29a0
	I0229 18:22:13.048403   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.048406   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.048409   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.048415   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.048421   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.051541   30631 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1303"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81706 chars]
	I0229 18:22:13.054559   30631 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.054621   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bwhnb
	I0229 18:22:13.054629   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.054635   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.054638   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.057244   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:13.057262   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.057271   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.057275   30631 round_trippers.go:580]     Audit-Id: 74405207-6c18-470f-a729-e80a1bc27681
	I0229 18:22:13.057279   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.057283   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.057288   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.057291   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.057874   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bwhnb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a3853502-49ad-4d24-8c63-3000e4f4aa8e","resourceVersion":"958","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"706ca334-c077-4f00-987d-5093fda91a27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"706ca334-c077-4f00-987d-5093fda91a27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 18:22:13.058300   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:13.058313   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.058320   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.058323   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.061439   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:13.061459   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.061468   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.061474   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.061478   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.061483   30631 round_trippers.go:580]     Audit-Id: a21d9a4c-8735-43d7-9de8-df73144788e6
	I0229 18:22:13.061489   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.061497   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.061813   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:13.062199   30631 pod_ready.go:92] pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:13.062216   30631 pod_ready.go:81] duration metric: took 7.636119ms waiting for pod "coredns-5dd5756b68-bwhnb" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.062226   30631 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.062279   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-051105
	I0229 18:22:13.062288   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.062294   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.062300   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.064148   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.064166   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.064173   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.064178   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.064182   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.064190   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.064196   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.064207   30631 round_trippers.go:580]     Audit-Id: 83c7e7a5-aa23-40ed-9a36-f150e549a732
	I0229 18:22:13.064411   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-051105","namespace":"kube-system","uid":"e73d8125-9770-4ddf-a382-a19adc1ed94f","resourceVersion":"948","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.200:2379","kubernetes.io/config.hash":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.mirror":"a3ee17954369c56d68a333413809975f","kubernetes.io/config.seen":"2024-02-29T18:06:55.285569285Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 18:22:13.064813   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:13.064830   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.064840   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.064845   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.066464   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.066480   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.066489   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.066493   30631 round_trippers.go:580]     Audit-Id: 37eb1c1a-82eb-4e29-90fa-c16db20a5f12
	I0229 18:22:13.066498   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.066506   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.066511   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.066522   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.066829   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:13.067157   30631 pod_ready.go:92] pod "etcd-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:13.067173   30631 pod_ready.go:81] duration metric: took 4.936686ms waiting for pod "etcd-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.067185   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.067220   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-051105
	I0229 18:22:13.067227   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.067233   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.067239   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.068872   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.068895   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.068902   30631 round_trippers.go:580]     Audit-Id: b6d6c5a1-d7ba-42da-a04e-4fb74c4c948f
	I0229 18:22:13.068907   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.068911   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.068915   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.068918   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.068923   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.069473   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-051105","namespace":"kube-system","uid":"722abb81-d303-4fa9-bcbb-8c16aaf4421d","resourceVersion":"925","creationTimestamp":"2024-02-29T18:07:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.200:8443","kubernetes.io/config.hash":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.mirror":"716aea331c832180bd818bead2d6fe09","kubernetes.io/config.seen":"2024-02-29T18:07:02.423715355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 18:22:13.069890   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:13.069913   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.069920   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.069925   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.071748   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.071763   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.071772   30631 round_trippers.go:580]     Audit-Id: 460a0fc3-393b-48e5-a661-d4db74cfeee2
	I0229 18:22:13.071779   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.071786   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.071790   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.071802   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.071806   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.072221   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:13.072513   30631 pod_ready.go:92] pod "kube-apiserver-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:13.072528   30631 pod_ready.go:81] duration metric: took 5.338014ms waiting for pod "kube-apiserver-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.072535   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.072570   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-051105
	I0229 18:22:13.072578   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.072584   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.072588   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.074110   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.074123   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.074129   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.074133   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.074135   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.074139   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.074142   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.074146   30631 round_trippers.go:580]     Audit-Id: a1e3a769-96c8-4c36-b85b-4746560a6755
	I0229 18:22:13.074412   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-051105","namespace":"kube-system","uid":"a3156cba-a585-47c6-8b26-2069af0021ce","resourceVersion":"929","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.mirror":"12776d77f75f6cff787ef977dae61db7","kubernetes.io/config.seen":"2024-02-29T18:06:55.285572192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 18:22:13.074812   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:13.074827   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.074837   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.074845   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.076400   30631 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:22:13.076417   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.076426   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.076430   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.076435   30631 round_trippers.go:580]     Audit-Id: 2dd96257-7aae-4fe6-86ec-eec5d05d1861
	I0229 18:22:13.076444   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.076448   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.076452   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.076861   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:13.077236   30631 pod_ready.go:92] pod "kube-controller-manager-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:13.077253   30631 pod_ready.go:81] duration metric: took 4.711124ms waiting for pod "kube-controller-manager-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.077263   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.239317   30631 request.go:629] Waited for 161.981654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:22:13.239385   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbl8s
	I0229 18:22:13.239393   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.239401   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.239406   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.242540   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:13.242562   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.242573   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.242579   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.242585   30631 round_trippers.go:580]     Audit-Id: a1077f9b-84ab-4c85-b933-b8c78814bfa6
	I0229 18:22:13.242591   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.242595   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.242600   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.243127   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cbl8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"352ba5ff-0a79-4766-8a3f-a0860aad1b91","resourceVersion":"1132","creationTimestamp":"2024-02-29T18:09:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5697 chars]
	I0229 18:22:13.439333   30631 request.go:629] Waited for 195.83113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:22:13.439380   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m02
	I0229 18:22:13.439385   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.439400   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.439407   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.442768   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:13.442793   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.442802   30631 round_trippers.go:580]     Audit-Id: 7d77dd5e-0b60-43c3-9423-1a4e75e24052
	I0229 18:22:13.442807   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.442811   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.442815   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.442821   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.442826   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.442966   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m02","uid":"ccce9f48-c73f-4045-b0aa-ccc8f0ee366c","resourceVersion":"1294","creationTimestamp":"2024-02-29T18:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_22_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 18:22:13.443229   30631 pod_ready.go:92] pod "kube-proxy-cbl8s" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:13.443244   30631 pod_ready.go:81] duration metric: took 365.972959ms waiting for pod "kube-proxy-cbl8s" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.443254   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:13.638814   30631 request.go:629] Waited for 195.488128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:22:13.638866   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:22:13.638873   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.638882   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.638887   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.641465   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:13.641483   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.641489   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.641495   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.641502   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.641505   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.641511   30631 round_trippers.go:580]     Audit-Id: a2a25eb5-d6e2-4832-b77a-072d3badcb6f
	I0229 18:22:13.641516   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.641674   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfw9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f","resourceVersion":"1301","creationTimestamp":"2024-02-29T18:09:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0229 18:22:13.839543   30631 request.go:629] Waited for 197.380775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:13.839648   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:13.839661   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:13.839673   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:13.839678   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:13.842091   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:13.842114   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:13.842121   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:13 GMT
	I0229 18:22:13.842124   30631 round_trippers.go:580]     Audit-Id: 97541749-5d5f-42fe-82e5-e2b6c96d6af2
	I0229 18:22:13.842126   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:13.842129   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:13.842131   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:13.842134   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:13.842301   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"c8b0486c-a69d-4afd-ae48-91e2c6509d77","resourceVersion":"1295","creationTimestamp":"2024-02-29T18:22:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_22_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:22:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0229 18:22:14.038850   30631 request.go:629] Waited for 95.282677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:22:14.038902   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfw9f
	I0229 18:22:14.038908   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:14.038915   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:14.038919   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:14.041573   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:14.041594   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:14.041603   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:14 GMT
	I0229 18:22:14.041609   30631 round_trippers.go:580]     Audit-Id: 0dcb510e-37bf-43c9-a5f3-bcf83107b7ba
	I0229 18:22:14.041614   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:14.041618   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:14.041621   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:14.041625   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:14.042063   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfw9f","generateName":"kube-proxy-","namespace":"kube-system","uid":"45e1b79c-2d6b-4169-a6f0-a3949ec4bc6f","resourceVersion":"1315","creationTimestamp":"2024-02-29T18:09:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5693 chars]
	I0229 18:22:14.238688   30631 request.go:629] Waited for 196.230834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:14.238740   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105-m03
	I0229 18:22:14.238744   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:14.238752   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:14.238755   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:14.241550   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:14.241569   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:14.241581   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:14.241588   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:14.241592   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:14 GMT
	I0229 18:22:14.241595   30631 round_trippers.go:580]     Audit-Id: 2788ec8e-f9ae-4c19-b04a-dfdca13937ef
	I0229 18:22:14.241600   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:14.241607   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:14.241900   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105-m03","uid":"c8b0486c-a69d-4afd-ae48-91e2c6509d77","resourceVersion":"1295","creationTimestamp":"2024-02-29T18:22:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_22_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:22:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0229 18:22:14.242261   30631 pod_ready.go:92] pod "kube-proxy-jfw9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:14.242281   30631 pod_ready.go:81] duration metric: took 799.020966ms waiting for pod "kube-proxy-jfw9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:14.242294   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:14.438661   30631 request.go:629] Waited for 196.305925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:22:14.438710   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wvhlx
	I0229 18:22:14.438744   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:14.438763   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:14.438772   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:14.442288   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:14.442308   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:14.442316   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:14.442320   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:14.442324   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:14.442330   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:14.442337   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:14 GMT
	I0229 18:22:14.442342   30631 round_trippers.go:580]     Audit-Id: d506f295-468e-45b7-a4ba-d3bbb9556bac
	I0229 18:22:14.442661   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wvhlx","generateName":"kube-proxy-","namespace":"kube-system","uid":"5548dfdd-2cda-48bc-9359-95eda53437d4","resourceVersion":"814","creationTimestamp":"2024-02-29T18:07:14Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"811deb55-d749-4c76-9949-4d9e40cf5500","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"811deb55-d749-4c76-9949-4d9e40cf5500\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 18:22:14.639535   30631 request.go:629] Waited for 196.350838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:14.639594   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:14.639599   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:14.639607   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:14.639610   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:14.642786   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:14.642803   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:14.642810   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:14.642813   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:14 GMT
	I0229 18:22:14.642815   30631 round_trippers.go:580]     Audit-Id: 38b09772-369e-4b5a-aee0-5801ac71827e
	I0229 18:22:14.642819   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:14.642821   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:14.642824   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:14.643242   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:14.643536   30631 pod_ready.go:92] pod "kube-proxy-wvhlx" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:14.643550   30631 pod_ready.go:81] duration metric: took 401.248786ms waiting for pod "kube-proxy-wvhlx" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:14.643558   30631 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:14.839641   30631 request.go:629] Waited for 196.013121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:22:14.839719   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-051105
	I0229 18:22:14.839726   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:14.839738   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:14.839745   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:14.843119   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:14.843138   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:14.843148   30631 round_trippers.go:580]     Audit-Id: d18b7128-51f6-4ee8-8b1b-fe415b859fc6
	I0229 18:22:14.843156   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:14.843161   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:14.843164   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:14.843169   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:14.843173   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:14 GMT
	I0229 18:22:14.843584   30631 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-051105","namespace":"kube-system","uid":"de579522-4a2a-4a66-86f0-8fd37603bb85","resourceVersion":"949","creationTimestamp":"2024-02-29T18:07:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.mirror":"16c1e8bd6ccedfe92575733385fa4d81","kubernetes.io/config.seen":"2024-02-29T18:06:55.285517129Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:07:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 18:22:15.039255   30631 request.go:629] Waited for 195.356187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:15.039335   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/multinode-051105
	I0229 18:22:15.039343   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:15.039351   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:15.039358   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:15.042894   30631 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:22:15.042913   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:15.042923   30631 round_trippers.go:580]     Audit-Id: 241954ad-f5b6-4222-8dfc-5101113bc8e2
	I0229 18:22:15.042928   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:15.042932   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:15.042937   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:15.042947   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:15.042960   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:15 GMT
	I0229 18:22:15.043343   30631 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T18:06:59Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 18:22:15.043637   30631 pod_ready.go:92] pod "kube-scheduler-multinode-051105" in "kube-system" namespace has status "Ready":"True"
	I0229 18:22:15.043650   30631 pod_ready.go:81] duration metric: took 400.086567ms waiting for pod "kube-scheduler-multinode-051105" in "kube-system" namespace to be "Ready" ...
	I0229 18:22:15.043659   30631 pod_ready.go:38] duration metric: took 2.001371425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:22:15.043672   30631 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:22:15.043714   30631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:22:15.061414   30631 system_svc.go:56] duration metric: took 17.735473ms WaitForService to wait for kubelet.
	I0229 18:22:15.061437   30631 kubeadm.go:581] duration metric: took 2.042832739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:22:15.061456   30631 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:22:15.238764   30631 request.go:629] Waited for 177.247605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0229 18:22:15.238813   30631 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0229 18:22:15.238818   30631 round_trippers.go:469] Request Headers:
	I0229 18:22:15.238825   30631 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:22:15.238828   30631 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 18:22:15.241595   30631 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:22:15.241613   30631 round_trippers.go:577] Response Headers:
	I0229 18:22:15.241620   30631 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55af0001-5093-4d88-ae45-6def34e8f5ed
	I0229 18:22:15.241626   30631 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:22:15 GMT
	I0229 18:22:15.241631   30631 round_trippers.go:580]     Audit-Id: b97fe052-0021-4636-b83f-146b0649cee0
	I0229 18:22:15.241638   30631 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:22:15.241643   30631 round_trippers.go:580]     Content-Type: application/json
	I0229 18:22:15.241650   30631 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d19784-9a2e-40f4-9b20-e0a3038f557c
	I0229 18:22:15.242168   30631 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1318"},"items":[{"metadata":{"name":"multinode-051105","uid":"614122aa-9203-4f41-a34b-07331562af09","resourceVersion":"975","creationTimestamp":"2024-02-29T18:06:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-051105","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-051105","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_07_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16750 chars]
	I0229 18:22:15.242746   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:22:15.242763   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:22:15.242773   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:22:15.242776   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:22:15.242782   30631 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:22:15.242789   30631 node_conditions.go:123] node cpu capacity is 2
	I0229 18:22:15.242799   30631 node_conditions.go:105] duration metric: took 181.338185ms to run NodePressure ...
	I0229 18:22:15.242814   30631 start.go:228] waiting for startup goroutines ...
	I0229 18:22:15.242831   30631 start.go:242] writing updated cluster config ...
	I0229 18:22:15.243147   30631 ssh_runner.go:195] Run: rm -f paused
	I0229 18:22:15.291383   30631 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:22:15.294050   30631 out.go:177] * Done! kubectl is now configured to use "multinode-051105" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.402530892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709230936402505329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=049b2832-92da-4eb7-b5bc-5fba3eead36d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.403170696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95f73d3a-fb2f-4adb-a964-b7713a77b56f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.403220474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95f73d3a-fb2f-4adb-a964-b7713a77b56f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.403485855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5db2c189060474cd4df8e693da42c742b78103357b29b69ad5b7f71b243232b,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709230738549352286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a7ce92024d85c7ed126f0f1471a111175b7d6569af8afce02f93d18eeb9a09,PodSandboxId:df3f40e9945d8d7e0e10ebf91dd1f40b8f3e62650959a5dfce73c0f671f03bfa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709230726154082566,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dl8t4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 46a6e5a5-9f89-4d6b-9558-553aab29a151,},Annotations:map[string]string{io.kubernetes.container.hash: fb2cbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9705351914a55766b7bade6123365c0c2567383dcede5cb1aa1b404dcd6a40,PodSandboxId:c91a16df90acd9e0d5d231c0dfde982b2b4e0f413abd19a5c39d6c96e392df6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709230723322336390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bwhnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3853502-49ad-4d24-8c63-3000e4f4aa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 89099dcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b74f2547fc1a0655d802b9a0bf5dd996de40bc255f05f0fe02b44a928afb122,PodSandboxId:369a1345fb19085d897610cd6127b3f1533d19cfbf124f3cd5feec5d9bc69bdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709230712177300852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r2q5q,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9,},Annotations:map[string]string{io.kubernetes.container.hash: 5141284e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17238dcb7b4118e9bee9b92dde05f124b6f128e39fb1f7c5a71436eb14fb89f,PodSandboxId:7ca92e409eca1e858d09960961dd45484cb2c1956d52f71f2de8c3e4f5b43286,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709230707817507767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvhlx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5548df
dd-2cda-48bc-9359-95eda53437d4,},Annotations:map[string]string{io.kubernetes.container.hash: 203cdaa8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019774951b46486ab3968f4810b1ff3d3b26e23a2541517664ee4cc3e1d9cd1f,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709230707821466165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-
bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67822f33e6a3172211dc238404728123c7cbb2022efc3321523c8138e2a58bb,PodSandboxId:af538972f0841e1b1ab0b67c3b5897a5a2764a7a753c9816a1ee75d4a1e93fdb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709230704080943096,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c1e8bd6ccedfe92575733385fa4
d81,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a6d2c5a9abf9f89f85c9b2c681f1c562d559b8ed0e0be087f80f4a1a6e6bfc,PodSandboxId:bcf7f312cb295758fd4a2224d2811051845bace8ad126fb9d653324ae019f4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709230704114932462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ee17954369c56d68a333413809975f,},Annotations:map[string]string{io.kuber
netes.container.hash: 77b4caba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcb146cde6242341bbd1019b6f7eadbcd080f92d7db2c9b8194f1cdfe6b4653,PodSandboxId:a3015b777a324415ea5ca88c6f452ae8049bbe3e5275d6f6860581b9f1a27259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709230704030363132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716aea331c832180bd818bead2d6fe09,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 44e555a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35170f1ce10a27f3459825ce5733e3c6a8096489dd2a7ca80927ea17b79b9b4,PodSandboxId:af7e3f3d7adc9c5660d2020b193c0159c4c344c24292f3a9e8b4be1e22c6cc8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709230703976983216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12776d77f75f6cff787ef977dae61db7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95f73d3a-fb2f-4adb-a964-b7713a77b56f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.447436110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a6d5fa8-ab5e-4b00-b91a-22eb7cbce3f4 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.447514544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a6d5fa8-ab5e-4b00-b91a-22eb7cbce3f4 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.449179340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=943e6017-6bb8-402a-bbcf-3d8dd0a1041a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.449690741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709230936449663089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=943e6017-6bb8-402a-bbcf-3d8dd0a1041a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.450447014Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fd47d46-533a-4479-ab16-23cb7ff9335b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.450496173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fd47d46-533a-4479-ab16-23cb7ff9335b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.450800751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5db2c189060474cd4df8e693da42c742b78103357b29b69ad5b7f71b243232b,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709230738549352286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a7ce92024d85c7ed126f0f1471a111175b7d6569af8afce02f93d18eeb9a09,PodSandboxId:df3f40e9945d8d7e0e10ebf91dd1f40b8f3e62650959a5dfce73c0f671f03bfa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709230726154082566,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dl8t4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 46a6e5a5-9f89-4d6b-9558-553aab29a151,},Annotations:map[string]string{io.kubernetes.container.hash: fb2cbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9705351914a55766b7bade6123365c0c2567383dcede5cb1aa1b404dcd6a40,PodSandboxId:c91a16df90acd9e0d5d231c0dfde982b2b4e0f413abd19a5c39d6c96e392df6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709230723322336390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bwhnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3853502-49ad-4d24-8c63-3000e4f4aa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 89099dcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b74f2547fc1a0655d802b9a0bf5dd996de40bc255f05f0fe02b44a928afb122,PodSandboxId:369a1345fb19085d897610cd6127b3f1533d19cfbf124f3cd5feec5d9bc69bdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709230712177300852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r2q5q,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9,},Annotations:map[string]string{io.kubernetes.container.hash: 5141284e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17238dcb7b4118e9bee9b92dde05f124b6f128e39fb1f7c5a71436eb14fb89f,PodSandboxId:7ca92e409eca1e858d09960961dd45484cb2c1956d52f71f2de8c3e4f5b43286,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709230707817507767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvhlx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5548df
dd-2cda-48bc-9359-95eda53437d4,},Annotations:map[string]string{io.kubernetes.container.hash: 203cdaa8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019774951b46486ab3968f4810b1ff3d3b26e23a2541517664ee4cc3e1d9cd1f,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709230707821466165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-
bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67822f33e6a3172211dc238404728123c7cbb2022efc3321523c8138e2a58bb,PodSandboxId:af538972f0841e1b1ab0b67c3b5897a5a2764a7a753c9816a1ee75d4a1e93fdb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709230704080943096,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c1e8bd6ccedfe92575733385fa4
d81,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a6d2c5a9abf9f89f85c9b2c681f1c562d559b8ed0e0be087f80f4a1a6e6bfc,PodSandboxId:bcf7f312cb295758fd4a2224d2811051845bace8ad126fb9d653324ae019f4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709230704114932462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ee17954369c56d68a333413809975f,},Annotations:map[string]string{io.kuber
netes.container.hash: 77b4caba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcb146cde6242341bbd1019b6f7eadbcd080f92d7db2c9b8194f1cdfe6b4653,PodSandboxId:a3015b777a324415ea5ca88c6f452ae8049bbe3e5275d6f6860581b9f1a27259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709230704030363132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716aea331c832180bd818bead2d6fe09,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 44e555a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35170f1ce10a27f3459825ce5733e3c6a8096489dd2a7ca80927ea17b79b9b4,PodSandboxId:af7e3f3d7adc9c5660d2020b193c0159c4c344c24292f3a9e8b4be1e22c6cc8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709230703976983216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12776d77f75f6cff787ef977dae61db7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fd47d46-533a-4479-ab16-23cb7ff9335b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.495751770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1392c437-1677-4d25-bef8-2a97b9305d0d name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.495824600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1392c437-1677-4d25-bef8-2a97b9305d0d name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.497200105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808e702b-52a3-4357-90bd-e22fd0b33382 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.497967103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709230936497941058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808e702b-52a3-4357-90bd-e22fd0b33382 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.498652646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55cfae61-1eb1-40c4-bb25-001ae12e02a4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.498707980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55cfae61-1eb1-40c4-bb25-001ae12e02a4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.498915926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5db2c189060474cd4df8e693da42c742b78103357b29b69ad5b7f71b243232b,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709230738549352286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a7ce92024d85c7ed126f0f1471a111175b7d6569af8afce02f93d18eeb9a09,PodSandboxId:df3f40e9945d8d7e0e10ebf91dd1f40b8f3e62650959a5dfce73c0f671f03bfa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709230726154082566,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dl8t4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 46a6e5a5-9f89-4d6b-9558-553aab29a151,},Annotations:map[string]string{io.kubernetes.container.hash: fb2cbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9705351914a55766b7bade6123365c0c2567383dcede5cb1aa1b404dcd6a40,PodSandboxId:c91a16df90acd9e0d5d231c0dfde982b2b4e0f413abd19a5c39d6c96e392df6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709230723322336390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bwhnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3853502-49ad-4d24-8c63-3000e4f4aa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 89099dcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b74f2547fc1a0655d802b9a0bf5dd996de40bc255f05f0fe02b44a928afb122,PodSandboxId:369a1345fb19085d897610cd6127b3f1533d19cfbf124f3cd5feec5d9bc69bdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709230712177300852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r2q5q,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9,},Annotations:map[string]string{io.kubernetes.container.hash: 5141284e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17238dcb7b4118e9bee9b92dde05f124b6f128e39fb1f7c5a71436eb14fb89f,PodSandboxId:7ca92e409eca1e858d09960961dd45484cb2c1956d52f71f2de8c3e4f5b43286,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709230707817507767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvhlx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5548df
dd-2cda-48bc-9359-95eda53437d4,},Annotations:map[string]string{io.kubernetes.container.hash: 203cdaa8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019774951b46486ab3968f4810b1ff3d3b26e23a2541517664ee4cc3e1d9cd1f,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709230707821466165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-
bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67822f33e6a3172211dc238404728123c7cbb2022efc3321523c8138e2a58bb,PodSandboxId:af538972f0841e1b1ab0b67c3b5897a5a2764a7a753c9816a1ee75d4a1e93fdb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709230704080943096,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c1e8bd6ccedfe92575733385fa4
d81,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a6d2c5a9abf9f89f85c9b2c681f1c562d559b8ed0e0be087f80f4a1a6e6bfc,PodSandboxId:bcf7f312cb295758fd4a2224d2811051845bace8ad126fb9d653324ae019f4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709230704114932462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ee17954369c56d68a333413809975f,},Annotations:map[string]string{io.kuber
netes.container.hash: 77b4caba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcb146cde6242341bbd1019b6f7eadbcd080f92d7db2c9b8194f1cdfe6b4653,PodSandboxId:a3015b777a324415ea5ca88c6f452ae8049bbe3e5275d6f6860581b9f1a27259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709230704030363132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716aea331c832180bd818bead2d6fe09,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 44e555a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35170f1ce10a27f3459825ce5733e3c6a8096489dd2a7ca80927ea17b79b9b4,PodSandboxId:af7e3f3d7adc9c5660d2020b193c0159c4c344c24292f3a9e8b4be1e22c6cc8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709230703976983216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12776d77f75f6cff787ef977dae61db7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55cfae61-1eb1-40c4-bb25-001ae12e02a4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.541136835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84d1308a-e8a2-4ec0-83c9-30a4df979d8a name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.541211039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84d1308a-e8a2-4ec0-83c9-30a4df979d8a name=/runtime.v1.RuntimeService/Version
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.542129007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa049702-f047-4a89-9e8b-9ced27aae547 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.542535404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709230936542515762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa049702-f047-4a89-9e8b-9ced27aae547 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.543622281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4593958d-ee34-4958-b3d8-336bd5023163 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.543675690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4593958d-ee34-4958-b3d8-336bd5023163 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:22:16 multinode-051105 crio[666]: time="2024-02-29 18:22:16.543894917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5db2c189060474cd4df8e693da42c742b78103357b29b69ad5b7f71b243232b,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709230738549352286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a7ce92024d85c7ed126f0f1471a111175b7d6569af8afce02f93d18eeb9a09,PodSandboxId:df3f40e9945d8d7e0e10ebf91dd1f40b8f3e62650959a5dfce73c0f671f03bfa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709230726154082566,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dl8t4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 46a6e5a5-9f89-4d6b-9558-553aab29a151,},Annotations:map[string]string{io.kubernetes.container.hash: fb2cbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9705351914a55766b7bade6123365c0c2567383dcede5cb1aa1b404dcd6a40,PodSandboxId:c91a16df90acd9e0d5d231c0dfde982b2b4e0f413abd19a5c39d6c96e392df6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709230723322336390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bwhnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3853502-49ad-4d24-8c63-3000e4f4aa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 89099dcc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b74f2547fc1a0655d802b9a0bf5dd996de40bc255f05f0fe02b44a928afb122,PodSandboxId:369a1345fb19085d897610cd6127b3f1533d19cfbf124f3cd5feec5d9bc69bdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709230712177300852,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r2q5q,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 4cdb5152-fbe1-4c9c-88ac-ec1fa682f3d9,},Annotations:map[string]string{io.kubernetes.container.hash: 5141284e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17238dcb7b4118e9bee9b92dde05f124b6f128e39fb1f7c5a71436eb14fb89f,PodSandboxId:7ca92e409eca1e858d09960961dd45484cb2c1956d52f71f2de8c3e4f5b43286,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709230707817507767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvhlx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5548df
dd-2cda-48bc-9359-95eda53437d4,},Annotations:map[string]string{io.kubernetes.container.hash: 203cdaa8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019774951b46486ab3968f4810b1ff3d3b26e23a2541517664ee4cc3e1d9cd1f,PodSandboxId:5af310fc6e7dea55843dc533430baf5248e119075bcc4f8522c4d17af914ef03,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709230707821466165,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d74dfd-e4ca-4a17-
bed1-24ab6dfd37b4,},Annotations:map[string]string{io.kubernetes.container.hash: 68bafaae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67822f33e6a3172211dc238404728123c7cbb2022efc3321523c8138e2a58bb,PodSandboxId:af538972f0841e1b1ab0b67c3b5897a5a2764a7a753c9816a1ee75d4a1e93fdb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709230704080943096,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c1e8bd6ccedfe92575733385fa4
d81,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a6d2c5a9abf9f89f85c9b2c681f1c562d559b8ed0e0be087f80f4a1a6e6bfc,PodSandboxId:bcf7f312cb295758fd4a2224d2811051845bace8ad126fb9d653324ae019f4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709230704114932462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3ee17954369c56d68a333413809975f,},Annotations:map[string]string{io.kuber
netes.container.hash: 77b4caba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcb146cde6242341bbd1019b6f7eadbcd080f92d7db2c9b8194f1cdfe6b4653,PodSandboxId:a3015b777a324415ea5ca88c6f452ae8049bbe3e5275d6f6860581b9f1a27259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709230704030363132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716aea331c832180bd818bead2d6fe09,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 44e555a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35170f1ce10a27f3459825ce5733e3c6a8096489dd2a7ca80927ea17b79b9b4,PodSandboxId:af7e3f3d7adc9c5660d2020b193c0159c4c344c24292f3a9e8b4be1e22c6cc8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709230703976983216,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-051105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12776d77f75f6cff787ef977dae61db7,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4593958d-ee34-4958-b3d8-336bd5023163 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b5db2c1890604       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   5af310fc6e7de       storage-provisioner
	a7a7ce92024d8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   df3f40e9945d8       busybox-5b5d89c9d6-dl8t4
	dd9705351914a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   c91a16df90acd       coredns-5dd5756b68-bwhnb
	2b74f2547fc1a       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    3 minutes ago       Running             kindnet-cni               1                   369a1345fb190       kindnet-r2q5q
	019774951b464       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   5af310fc6e7de       storage-provisioner
	b17238dcb7b41       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   7ca92e409eca1       kube-proxy-wvhlx
	28a6d2c5a9abf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   bcf7f312cb295       etcd-multinode-051105
	a67822f33e6a3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   af538972f0841       kube-scheduler-multinode-051105
	2fcb146cde624       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   a3015b777a324       kube-apiserver-multinode-051105
	d35170f1ce10a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   af7e3f3d7adc9       kube-controller-manager-multinode-051105
	
	
	==> coredns [dd9705351914a55766b7bade6123365c0c2567383dcede5cb1aa1b404dcd6a40] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35026 - 8136 "HINFO IN 298777731306103645.1410900800579919378. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021258126s
	
	
	==> describe nodes <==
	Name:               multinode-051105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-051105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-051105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T18_07_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:06:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-051105
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:22:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:18:57 +0000   Thu, 29 Feb 2024 18:06:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:18:57 +0000   Thu, 29 Feb 2024 18:06:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:18:57 +0000   Thu, 29 Feb 2024 18:06:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:18:57 +0000   Thu, 29 Feb 2024 18:18:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    multinode-051105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 52540e9be60d4db992b598fa948dc0d3
	  System UUID:                52540e9b-e60d-4db9-92b5-98fa948dc0d3
	  Boot ID:                    32b4b07d-52db-44bc-a61d-f8d1d4d9953c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dl8t4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-bwhnb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-051105                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-r2q5q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-051105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-051105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-wvhlx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-051105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node multinode-051105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node multinode-051105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node multinode-051105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m                    kubelet          Node multinode-051105 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                    kubelet          Node multinode-051105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m                    kubelet          Node multinode-051105 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           15m                    node-controller  Node multinode-051105 event: Registered Node multinode-051105 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-051105 status is now: NodeReady
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node multinode-051105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node multinode-051105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node multinode-051105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-051105 event: Registered Node multinode-051105 in Controller
	
	
	Name:               multinode-051105-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-051105-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-051105
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:20:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-051105-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:22:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:20:24 +0000   Thu, 29 Feb 2024 18:20:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:20:24 +0000   Thu, 29 Feb 2024 18:20:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:20:24 +0000   Thu, 29 Feb 2024 18:20:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:20:24 +0000   Thu, 29 Feb 2024 18:20:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    multinode-051105-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0298699174c46db9d5a1a8b334593ef
	  System UUID:                e0298699-174c-46db-9d5a-1a8b334593ef
	  Boot ID:                    5981e1ff-1835-4328-8a2f-c5f111eed97d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ptxqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-c2ztr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-cbl8s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 110s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-051105-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-051105-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-051105-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m55s                  kubelet     Node multinode-051105-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m10s (x2 over 3m10s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       114s                   kubelet     Node multinode-051105-m02 status is now: NodeNotSchedulable
	  Normal   NodeReady                114s (x2 over 12m)     kubelet     Node multinode-051105-m02 status is now: NodeReady
	  Normal   Starting                 112s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  112s (x2 over 112s)    kubelet     Node multinode-051105-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    112s (x2 over 112s)    kubelet     Node multinode-051105-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     112s (x2 over 112s)    kubelet     Node multinode-051105-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  112s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                112s                   kubelet     Node multinode-051105-m02 status is now: NodeReady
	
	
	Name:               multinode-051105-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-051105-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-051105
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-051105-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:22:12 +0000   Thu, 29 Feb 2024 18:22:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:22:12 +0000   Thu, 29 Feb 2024 18:22:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:22:12 +0000   Thu, 29 Feb 2024 18:22:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:22:12 +0000   Thu, 29 Feb 2024 18:22:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-051105-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 86e1fda4b96844a7a90e3cea748ee786
	  System UUID:                86e1fda4-b968-44a7-a90e-3cea748ee786
	  Boot ID:                    747ef5e9-56cc-45dd-b514-5a07c4e7ff3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-25wqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kindnet-kvkf2               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-jfw9f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-051105-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-051105-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-051105-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-051105-m03 status is now: NodeReady
	  Normal   NodeNotReady             74s                 kubelet     Node multinode-051105-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-051105-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-051105-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                4s                  kubelet     Node multinode-051105-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb29 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052840] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043499] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528959] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.334938] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.703829] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb29 18:18] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.055492] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059433] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.219778] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.127950] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.249904] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[ +17.370531] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.055586] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.019464] kauditd_printk_skb: 94 callbacks suppressed
	[  +4.999196] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.118233] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [28a6d2c5a9abf9f89f85c9b2c681f1c562d559b8ed0e0be087f80f4a1a6e6bfc] <==
	{"level":"info","ts":"2024-02-29T18:18:24.771956Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:18:24.771982Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:18:24.772275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 switched to configuration voters=(1146381907749364645)"}
	{"level":"info","ts":"2024-02-29T18:18:24.772345Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","added-peer-id":"fe8c4457455e3a5","added-peer-peer-urls":["https://192.168.39.200:2380"]}
	{"level":"info","ts":"2024-02-29T18:18:24.77245Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:18:24.772982Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:18:24.773429Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:18:24.773477Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:18:24.773004Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-02-29T18:18:24.783831Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-02-29T18:18:24.783984Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:18:25.832676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:18:25.832753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:18:25.832795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-02-29T18:18:25.832813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:18:25.832821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-02-29T18:18:25.832834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:18:25.832844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 3"}
	{"level":"info","ts":"2024-02-29T18:18:25.834394Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:multinode-051105 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:18:25.834651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:18:25.834777Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:18:25.835984Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:18:25.836215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:18:25.836251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:18:25.835997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	
	
	==> kernel <==
	 18:22:16 up 4 min,  0 users,  load average: 0.52, 0.32, 0.14
	Linux multinode-051105 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2b74f2547fc1a0655d802b9a0bf5dd996de40bc255f05f0fe02b44a928afb122] <==
	I0229 18:21:43.259823       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0229 18:21:43.259995       1 main.go:227] handling current node
	I0229 18:21:43.260005       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0229 18:21:43.260011       1 main.go:250] Node multinode-051105-m02 has CIDR [10.244.1.0/24] 
	I0229 18:21:43.260363       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0229 18:21:43.260488       1 main.go:250] Node multinode-051105-m03 has CIDR [10.244.3.0/24] 
	I0229 18:21:53.270453       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0229 18:21:53.270738       1 main.go:227] handling current node
	I0229 18:21:53.270769       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0229 18:21:53.270789       1 main.go:250] Node multinode-051105-m02 has CIDR [10.244.1.0/24] 
	I0229 18:21:53.270940       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0229 18:21:53.270963       1 main.go:250] Node multinode-051105-m03 has CIDR [10.244.3.0/24] 
	I0229 18:22:03.277055       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0229 18:22:03.277105       1 main.go:227] handling current node
	I0229 18:22:03.277124       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0229 18:22:03.277131       1 main.go:250] Node multinode-051105-m02 has CIDR [10.244.1.0/24] 
	I0229 18:22:03.277267       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0229 18:22:03.277302       1 main.go:250] Node multinode-051105-m03 has CIDR [10.244.3.0/24] 
	I0229 18:22:13.289285       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0229 18:22:13.289329       1 main.go:227] handling current node
	I0229 18:22:13.289339       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I0229 18:22:13.289345       1 main.go:250] Node multinode-051105-m02 has CIDR [10.244.1.0/24] 
	I0229 18:22:13.289461       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0229 18:22:13.289491       1 main.go:250] Node multinode-051105-m03 has CIDR [10.244.2.0/24] 
	I0229 18:22:13.289622       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.78 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [2fcb146cde6242341bbd1019b6f7eadbcd080f92d7db2c9b8194f1cdfe6b4653] <==
	I0229 18:18:27.147900       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0229 18:18:27.185091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:18:27.185240       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:18:27.308183       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:18:27.346365       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:18:27.348690       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:18:27.348959       1 aggregator.go:166] initial CRD sync complete...
	I0229 18:18:27.349022       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:18:27.349048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:18:27.353292       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:18:27.353381       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 18:18:27.353389       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 18:18:27.359441       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:18:27.427233       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:18:27.446656       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0229 18:18:27.449847       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 18:18:27.460847       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:18:28.149665       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 18:18:29.795273       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 18:18:29.939272       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 18:18:29.950952       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 18:18:30.026211       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 18:18:30.034158       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 18:18:40.180956       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:18:40.211219       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d35170f1ce10a27f3459825ce5733e3c6a8096489dd2a7ca80927ea17b79b9b4] <==
	I0229 18:20:24.667209       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-051105-m02\" does not exist"
	I0229 18:20:24.667408       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-m9jth" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-m9jth"
	I0229 18:20:24.685356       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-051105-m02" podCIDRs=["10.244.1.0/24"]
	I0229 18:20:24.700955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-051105-m02"
	I0229 18:20:24.964987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.723046ms"
	I0229 18:20:24.966108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.786µs"
	I0229 18:20:25.552316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.122µs"
	I0229 18:20:35.100693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="115.972µs"
	I0229 18:20:36.398364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.785µs"
	I0229 18:20:36.403475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="234.059µs"
	I0229 18:21:02.588306       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-051105-m02"
	I0229 18:22:08.212411       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-ptxqq"
	I0229 18:22:08.225331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.973732ms"
	I0229 18:22:08.251704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="24.92555ms"
	I0229 18:22:08.251838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="41.68µs"
	I0229 18:22:08.251999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="83.691µs"
	I0229 18:22:09.693317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.251768ms"
	I0229 18:22:09.694293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="32.372µs"
	I0229 18:22:11.224075       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-051105-m02"
	I0229 18:22:11.859929       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-051105-m03\" does not exist"
	I0229 18:22:11.861047       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-051105-m02"
	I0229 18:22:11.863027       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-25wqb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-25wqb"
	I0229 18:22:11.873177       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-051105-m03" podCIDRs=["10.244.2.0/24"]
	I0229 18:22:11.999924       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-051105-m02"
	I0229 18:22:12.778485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.239µs"
	
	
	==> kube-proxy [b17238dcb7b4118e9bee9b92dde05f124b6f128e39fb1f7c5a71436eb14fb89f] <==
	I0229 18:18:27.990065       1 server_others.go:69] "Using iptables proxy"
	I0229 18:18:28.000444       1 node.go:141] Successfully retrieved node IP: 192.168.39.200
	I0229 18:18:28.060086       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:18:28.060157       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:18:28.064019       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:18:28.064090       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:18:28.064356       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:18:28.064540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:18:28.065298       1 config.go:188] "Starting service config controller"
	I0229 18:18:28.065362       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:18:28.065396       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:18:28.065411       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:18:28.066089       1 config.go:315] "Starting node config controller"
	I0229 18:18:28.066223       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:18:28.166121       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:18:28.166275       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:18:28.166199       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a67822f33e6a3172211dc238404728123c7cbb2022efc3321523c8138e2a58bb] <==
	I0229 18:18:25.034300       1 serving.go:348] Generated self-signed cert in-memory
	W0229 18:18:27.213267       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:18:27.213385       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:18:27.213497       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:18:27.213536       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:18:27.336034       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 18:18:27.336124       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:18:27.347623       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:18:27.347673       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:18:27.348437       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:18:27.350747       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:18:27.447901       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:18:31 multinode-051105 kubelet[875]: E0229 18:18:31.354356     875 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-dl8t4" podUID="46a6e5a5-9f89-4d6b-9558-553aab29a151"
	Feb 29 18:18:33 multinode-051105 kubelet[875]: E0229 18:18:33.353116     875 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-bwhnb" podUID="a3853502-49ad-4d24-8c63-3000e4f4aa8e"
	Feb 29 18:18:33 multinode-051105 kubelet[875]: E0229 18:18:33.354010     875 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-dl8t4" podUID="46a6e5a5-9f89-4d6b-9558-553aab29a151"
	Feb 29 18:18:33 multinode-051105 kubelet[875]: I0229 18:18:33.358669     875 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 29 18:18:35 multinode-051105 kubelet[875]: E0229 18:18:35.030503     875 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 29 18:18:35 multinode-051105 kubelet[875]: E0229 18:18:35.030630     875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3853502-49ad-4d24-8c63-3000e4f4aa8e-config-volume podName:a3853502-49ad-4d24-8c63-3000e4f4aa8e nodeName:}" failed. No retries permitted until 2024-02-29 18:18:43.030557909 +0000 UTC m=+19.936037796 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a3853502-49ad-4d24-8c63-3000e4f4aa8e-config-volume") pod "coredns-5dd5756b68-bwhnb" (UID: "a3853502-49ad-4d24-8c63-3000e4f4aa8e") : object "kube-system"/"coredns" not registered
	Feb 29 18:18:35 multinode-051105 kubelet[875]: E0229 18:18:35.131043     875 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 29 18:18:35 multinode-051105 kubelet[875]: E0229 18:18:35.131176     875 projected.go:198] Error preparing data for projected volume kube-api-access-glzmm for pod default/busybox-5b5d89c9d6-dl8t4: object "default"/"kube-root-ca.crt" not registered
	Feb 29 18:18:35 multinode-051105 kubelet[875]: E0229 18:18:35.131286     875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46a6e5a5-9f89-4d6b-9558-553aab29a151-kube-api-access-glzmm podName:46a6e5a5-9f89-4d6b-9558-553aab29a151 nodeName:}" failed. No retries permitted until 2024-02-29 18:18:43.1312683 +0000 UTC m=+20.036748177 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-glzmm" (UniqueName: "kubernetes.io/projected/46a6e5a5-9f89-4d6b-9558-553aab29a151-kube-api-access-glzmm") pod "busybox-5b5d89c9d6-dl8t4" (UID: "46a6e5a5-9f89-4d6b-9558-553aab29a151") : object "default"/"kube-root-ca.crt" not registered
	Feb 29 18:18:58 multinode-051105 kubelet[875]: I0229 18:18:58.533709     875 scope.go:117] "RemoveContainer" containerID="019774951b46486ab3968f4810b1ff3d3b26e23a2541517664ee4cc3e1d9cd1f"
	Feb 29 18:19:23 multinode-051105 kubelet[875]: E0229 18:19:23.369876     875 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:19:23 multinode-051105 kubelet[875]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:19:23 multinode-051105 kubelet[875]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:19:23 multinode-051105 kubelet[875]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:19:23 multinode-051105 kubelet[875]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:20:23 multinode-051105 kubelet[875]: E0229 18:20:23.369758     875 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:20:23 multinode-051105 kubelet[875]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:20:23 multinode-051105 kubelet[875]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:20:23 multinode-051105 kubelet[875]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:20:23 multinode-051105 kubelet[875]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:21:23 multinode-051105 kubelet[875]: E0229 18:21:23.368242     875 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:21:23 multinode-051105 kubelet[875]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:21:23 multinode-051105 kubelet[875]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:21:23 multinode-051105 kubelet[875]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:21:23 multinode-051105 kubelet[875]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-051105 -n multinode-051105
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-051105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.50s)
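A possible follow-up for the repeated "Could not set up iptables canary" kubelet errors above (illustrative only, not part of the recorded run, and applicable only while the multinode-051105 profile still exists): check from inside the guest whether the ip6tables nat table is usable and whether the ip6table_nat kernel module can be loaded.

    out/minikube-linux-amd64 -p multinode-051105 ssh "sudo ip6tables -t nat -L"
    out/minikube-linux-amd64 -p multinode-051105 ssh "sudo modprobe ip6table_nat"

If the first command fails with "Table does not exist" and the second succeeds, the canary failures point at a missing kernel module in the guest image rather than at the restart logic under test.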

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 stop
E0229 18:22:43.785705   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:22:46.665596   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051105 stop: exit status 82 (2m0.263614147s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-051105"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-051105 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051105 status: exit status 3 (18.691442787s)

                                                
                                                
-- stdout --
	multinode-051105
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-051105-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:24:38.579395   33349 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0229 18:24:38.579434   33349 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-051105 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-051105 -n multinode-051105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-051105 -n multinode-051105: exit status 3 (3.164141346s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:24:41.907398   33456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0229 18:24:41.907421   33456 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-051105" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.12s)
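A possible follow-up for the GUEST_STOP_TIMEOUT above, using the commands the advice box itself names (illustrative only, not part of the recorded run):

    out/minikube-linux-amd64 -p multinode-051105 logs --file=logs.txt
    cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log

The first command gathers the cluster logs the GitHub issue template asks for; the second prints the stop-specific log file referenced in the stderr output above.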

                                                
                                    
x
+
TestPreload (348.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-771718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-771718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m26.425452469s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-771718 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-771718 image pull gcr.io/k8s-minikube/busybox: (2.796174288s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-771718
E0229 18:37:43.785801   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:37:46.663172   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-771718: exit status 82 (2m0.263493945s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-771718"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-771718 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-02-29 18:38:29.44426837 +0000 UTC m=+3666.939252155
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-771718 -n test-preload-771718
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-771718 -n test-preload-771718: exit status 3 (18.547125425s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:38:47.987369   36638 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0229 18:38:47.987393   36638 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-771718" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-771718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-771718
--- FAIL: TestPreload (348.92s)
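The "no route to host" errors on 192.168.39.120:22 above mean the VM's SSH port stopped answering after the failed stop. An illustrative reachability check, applicable only while the test-preload-771718 VM still exists (the cleanup step above deletes the profile):

    virsh -c qemu:///system list --all
    nc -vz -w 5 192.168.39.120 22

virsh shows whether the libvirt domain is still defined and running; the nc probe confirms whether TCP port 22 is reachable at the address reported in the status errors.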

                                                
                                    
x
+
TestKubernetesUpgrade (418.76s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.775702621s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-541086] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-541086 in cluster kubernetes-upgrade-541086
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:40:49.662536   37609 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:40:49.662770   37609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:40:49.662779   37609 out.go:304] Setting ErrFile to fd 2...
	I0229 18:40:49.662783   37609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:40:49.662960   37609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:40:49.663501   37609 out.go:298] Setting JSON to false
	I0229 18:40:49.664155   37609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4994,"bootTime":1709227056,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:40:49.664210   37609 start.go:139] virtualization: kvm guest
	I0229 18:40:49.666470   37609 out.go:177] * [kubernetes-upgrade-541086] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:40:49.668671   37609 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:40:49.667669   37609 notify.go:220] Checking for updates...
	I0229 18:40:49.671612   37609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:40:49.673683   37609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:40:49.675225   37609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:40:49.677081   37609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:40:49.678422   37609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:40:49.679814   37609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:40:49.720830   37609 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:40:49.722161   37609 start.go:299] selected driver: kvm2
	I0229 18:40:49.722180   37609 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:40:49.722195   37609 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:40:49.723220   37609 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:40:49.737541   37609 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:40:49.752812   37609 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:40:49.752857   37609 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:40:49.753064   37609 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 18:40:49.753129   37609 cni.go:84] Creating CNI manager for ""
	I0229 18:40:49.753145   37609 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:40:49.753158   37609 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:40:49.753189   37609 start_flags.go:323] config:
	{Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:40:49.753306   37609 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:40:49.754847   37609 out.go:177] * Starting control plane node kubernetes-upgrade-541086 in cluster kubernetes-upgrade-541086
	I0229 18:40:49.756065   37609 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:40:49.756112   37609 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 18:40:49.756121   37609 cache.go:56] Caching tarball of preloaded images
	I0229 18:40:49.756203   37609 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:40:49.756218   37609 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 18:40:49.756637   37609 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/config.json ...
	I0229 18:40:49.756664   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/config.json: {Name:mk76a7b4fccea5321de57707afa9dea3243eef95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:40:49.756807   37609 start.go:365] acquiring machines lock for kubernetes-upgrade-541086: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:41:16.532302   37609 start.go:369] acquired machines lock for "kubernetes-upgrade-541086" in 26.775459733s
	I0229 18:41:16.532382   37609 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:41:16.532527   37609 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:41:16.534468   37609 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:41:16.534634   37609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:41:16.534690   37609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:41:16.550972   37609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I0229 18:41:16.551357   37609 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:41:16.551903   37609 main.go:141] libmachine: Using API Version  1
	I0229 18:41:16.551920   37609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:41:16.552271   37609 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:41:16.552472   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:41:16.552623   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:16.552797   37609 start.go:159] libmachine.API.Create for "kubernetes-upgrade-541086" (driver="kvm2")
	I0229 18:41:16.552833   37609 client.go:168] LocalClient.Create starting
	I0229 18:41:16.552867   37609 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 18:41:16.552910   37609 main.go:141] libmachine: Decoding PEM data...
	I0229 18:41:16.552932   37609 main.go:141] libmachine: Parsing certificate...
	I0229 18:41:16.552999   37609 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 18:41:16.553025   37609 main.go:141] libmachine: Decoding PEM data...
	I0229 18:41:16.553045   37609 main.go:141] libmachine: Parsing certificate...
	I0229 18:41:16.553071   37609 main.go:141] libmachine: Running pre-create checks...
	I0229 18:41:16.553087   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .PreCreateCheck
	I0229 18:41:16.553417   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetConfigRaw
	I0229 18:41:16.553838   37609 main.go:141] libmachine: Creating machine...
	I0229 18:41:16.553855   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .Create
	I0229 18:41:16.553965   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Creating KVM machine...
	I0229 18:41:16.554973   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found existing default KVM network
	I0229 18:41:16.555723   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:16.555585   37901 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:68:d8:ca} reservation:<nil>}
	I0229 18:41:16.556344   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:16.556263   37901 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0229 18:41:16.561440   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | trying to create private KVM network mk-kubernetes-upgrade-541086 192.168.50.0/24...
	I0229 18:41:16.628027   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | private KVM network mk-kubernetes-upgrade-541086 192.168.50.0/24 created
	I0229 18:41:16.628057   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:16.627982   37901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:41:16.628071   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086 ...
	I0229 18:41:16.628085   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:41:16.628139   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:41:16.847055   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:16.846890   37901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa...
	I0229 18:41:17.081162   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:17.081028   37901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/kubernetes-upgrade-541086.rawdisk...
	I0229 18:41:17.081196   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Writing magic tar header
	I0229 18:41:17.081215   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Writing SSH key tar header
	I0229 18:41:17.081224   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:17.081154   37901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086 ...
	I0229 18:41:17.081286   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086
	I0229 18:41:17.081315   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 18:41:17.081328   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086 (perms=drwx------)
	I0229 18:41:17.081355   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:41:17.081374   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 18:41:17.081384   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:41:17.081404   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:41:17.081420   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:41:17.081438   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 18:41:17.081451   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 18:41:17.081465   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:41:17.081479   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:41:17.081488   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Checking permissions on dir: /home
	I0229 18:41:17.081499   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Skipping /home - not owner
	I0229 18:41:17.081509   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Creating domain...
	I0229 18:41:17.082621   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) define libvirt domain using xml: 
	I0229 18:41:17.082662   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) <domain type='kvm'>
	I0229 18:41:17.082673   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <name>kubernetes-upgrade-541086</name>
	I0229 18:41:17.082689   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <memory unit='MiB'>2200</memory>
	I0229 18:41:17.082699   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <vcpu>2</vcpu>
	I0229 18:41:17.082710   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <features>
	I0229 18:41:17.082719   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <acpi/>
	I0229 18:41:17.082727   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <apic/>
	I0229 18:41:17.082736   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <pae/>
	I0229 18:41:17.082747   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     
	I0229 18:41:17.082758   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   </features>
	I0229 18:41:17.082768   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <cpu mode='host-passthrough'>
	I0229 18:41:17.082812   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   
	I0229 18:41:17.082839   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   </cpu>
	I0229 18:41:17.082853   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <os>
	I0229 18:41:17.082866   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <type>hvm</type>
	I0229 18:41:17.082876   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <boot dev='cdrom'/>
	I0229 18:41:17.082888   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <boot dev='hd'/>
	I0229 18:41:17.082901   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <bootmenu enable='no'/>
	I0229 18:41:17.082911   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   </os>
	I0229 18:41:17.082919   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   <devices>
	I0229 18:41:17.082936   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <disk type='file' device='cdrom'>
	I0229 18:41:17.082965   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/boot2docker.iso'/>
	I0229 18:41:17.082983   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <target dev='hdc' bus='scsi'/>
	I0229 18:41:17.082995   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <readonly/>
	I0229 18:41:17.083005   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </disk>
	I0229 18:41:17.083041   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <disk type='file' device='disk'>
	I0229 18:41:17.083085   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:41:17.083106   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/kubernetes-upgrade-541086.rawdisk'/>
	I0229 18:41:17.083119   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <target dev='hda' bus='virtio'/>
	I0229 18:41:17.083129   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </disk>
	I0229 18:41:17.083136   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <interface type='network'>
	I0229 18:41:17.083147   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <source network='mk-kubernetes-upgrade-541086'/>
	I0229 18:41:17.083158   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <model type='virtio'/>
	I0229 18:41:17.083163   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </interface>
	I0229 18:41:17.083168   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <interface type='network'>
	I0229 18:41:17.083173   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <source network='default'/>
	I0229 18:41:17.083178   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <model type='virtio'/>
	I0229 18:41:17.083183   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </interface>
	I0229 18:41:17.083187   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <serial type='pty'>
	I0229 18:41:17.083192   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <target port='0'/>
	I0229 18:41:17.083196   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </serial>
	I0229 18:41:17.083201   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <console type='pty'>
	I0229 18:41:17.083205   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <target type='serial' port='0'/>
	I0229 18:41:17.083210   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </console>
	I0229 18:41:17.083214   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     <rng model='virtio'>
	I0229 18:41:17.083220   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)       <backend model='random'>/dev/random</backend>
	I0229 18:41:17.083223   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     </rng>
	I0229 18:41:17.083228   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     
	I0229 18:41:17.083232   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)     
	I0229 18:41:17.083251   37609 main.go:141] libmachine: (kubernetes-upgrade-541086)   </devices>
	I0229 18:41:17.083271   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) </domain>
	I0229 18:41:17.083299   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) 
	I0229 18:41:17.087959   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:8b:1c:4f in network default
	I0229 18:41:17.088537   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Ensuring networks are active...
	I0229 18:41:17.088558   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:17.089311   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Ensuring network default is active
	I0229 18:41:17.089699   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Ensuring network mk-kubernetes-upgrade-541086 is active
	I0229 18:41:17.090284   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Getting domain xml...
	I0229 18:41:17.091130   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Creating domain...
	I0229 18:41:18.308395   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Waiting to get IP...
	I0229 18:41:18.309357   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.309781   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.309806   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:18.309760   37901 retry.go:31] will retry after 261.208708ms: waiting for machine to come up
	I0229 18:41:18.572236   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.572763   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.572797   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:18.572708   37901 retry.go:31] will retry after 254.407395ms: waiting for machine to come up
	I0229 18:41:18.829180   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.829632   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:18.829677   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:18.829604   37901 retry.go:31] will retry after 423.656443ms: waiting for machine to come up
	I0229 18:41:19.255305   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:19.255891   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:19.255919   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:19.255849   37901 retry.go:31] will retry after 495.924754ms: waiting for machine to come up
	I0229 18:41:19.753139   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:19.753582   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:19.753612   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:19.753533   37901 retry.go:31] will retry after 615.538787ms: waiting for machine to come up
	I0229 18:41:20.370339   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:20.370884   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:20.370909   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:20.370846   37901 retry.go:31] will retry after 668.229587ms: waiting for machine to come up
	I0229 18:41:21.041152   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:21.041662   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:21.041695   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:21.041606   37901 retry.go:31] will retry after 768.629576ms: waiting for machine to come up
	I0229 18:41:21.812066   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:21.812495   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:21.812522   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:21.812452   37901 retry.go:31] will retry after 1.001490885s: waiting for machine to come up
	I0229 18:41:22.815546   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:22.815946   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:22.815978   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:22.815900   37901 retry.go:31] will retry after 1.30044569s: waiting for machine to come up
	I0229 18:41:24.118333   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:24.118767   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:24.118794   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:24.118715   37901 retry.go:31] will retry after 1.4480792s: waiting for machine to come up
	I0229 18:41:25.568275   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:25.568670   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:25.568692   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:25.568630   37901 retry.go:31] will retry after 2.120708689s: waiting for machine to come up
	I0229 18:41:27.691380   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:27.691809   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:27.691838   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:27.691759   37901 retry.go:31] will retry after 3.35623364s: waiting for machine to come up
	I0229 18:41:31.052016   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:31.052305   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:31.052335   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:31.052267   37901 retry.go:31] will retry after 3.864263688s: waiting for machine to come up
	I0229 18:41:34.920251   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:34.920744   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find current IP address of domain kubernetes-upgrade-541086 in network mk-kubernetes-upgrade-541086
	I0229 18:41:34.920767   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | I0229 18:41:34.920704   37901 retry.go:31] will retry after 4.862475115s: waiting for machine to come up
	I0229 18:41:39.784826   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.785322   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Found IP for machine: 192.168.50.47
	I0229 18:41:39.785360   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Reserving static IP address...
	I0229 18:41:39.785376   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has current primary IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.785576   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-541086", mac: "52:54:00:2d:88:b9", ip: "192.168.50.47"} in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.857295   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Getting to WaitForSSH function...
	I0229 18:41:39.857328   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Reserved static IP address: 192.168.50.47
	I0229 18:41:39.857342   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Waiting for SSH to be available...
	I0229 18:41:39.859843   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.860257   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:39.860286   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.860475   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Using SSH client type: external
	I0229 18:41:39.860514   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa (-rw-------)
	I0229 18:41:39.860544   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:41:39.860558   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | About to run SSH command:
	I0229 18:41:39.860576   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | exit 0
	I0229 18:41:39.991061   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | SSH cmd err, output: <nil>: 
	I0229 18:41:39.991333   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) KVM machine creation complete!
	I0229 18:41:39.991655   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetConfigRaw
	I0229 18:41:39.992204   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:39.992419   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:39.992652   37609 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:41:39.992672   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetState
	I0229 18:41:39.993872   37609 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:41:39.993888   37609 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:41:39.993897   37609 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:41:39.993906   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:39.996174   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.996575   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:39.996602   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:39.996753   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:39.996940   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:39.997102   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:39.997217   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:39.997389   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:39.997619   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:39.997632   37609 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:41:40.106335   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:41:40.106363   37609 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:41:40.106373   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.108873   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.109226   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.109259   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.109373   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:40.109570   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.109696   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.109829   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:40.109972   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:40.110184   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:40.110200   37609 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:41:40.220450   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:41:40.220532   37609 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:41:40.220542   37609 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:41:40.220550   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:41:40.220806   37609 buildroot.go:166] provisioning hostname "kubernetes-upgrade-541086"
	I0229 18:41:40.220839   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:41:40.221031   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.223677   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.223978   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.224024   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.224089   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:40.224294   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.224477   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.224664   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:40.224840   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:40.225048   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:40.225067   37609 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-541086 && echo "kubernetes-upgrade-541086" | sudo tee /etc/hostname
	I0229 18:41:40.354202   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-541086
	
	I0229 18:41:40.354235   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.357456   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.358006   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.358036   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.358279   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:40.358563   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.358736   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.358973   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:40.359262   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:40.359480   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:40.359498   37609 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-541086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-541086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-541086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:41:40.477934   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:41:40.477961   37609 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:41:40.478005   37609 buildroot.go:174] setting up certificates
	I0229 18:41:40.478013   37609 provision.go:83] configureAuth start
	I0229 18:41:40.478022   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:41:40.478281   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:41:40.481188   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.481541   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.481569   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.481743   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.483813   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.484096   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.484116   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.484262   37609 provision.go:138] copyHostCerts
	I0229 18:41:40.484327   37609 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:41:40.484347   37609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:41:40.484417   37609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:41:40.484579   37609 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:41:40.484589   37609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:41:40.484613   37609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:41:40.484683   37609 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:41:40.484690   37609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:41:40.484708   37609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:41:40.484761   37609 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-541086 san=[192.168.50.47 192.168.50.47 localhost 127.0.0.1 minikube kubernetes-upgrade-541086]
	I0229 18:41:40.582793   37609 provision.go:172] copyRemoteCerts
	I0229 18:41:40.582847   37609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:41:40.582869   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.586536   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.586983   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.587012   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.587151   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:40.587313   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.587441   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:40.587547   37609 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:41:40.679113   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:41:40.705692   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:41:40.731730   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:41:40.760696   37609 provision.go:86] duration metric: configureAuth took 282.669712ms
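(annotation) The configureAuth step above generates a CA-signed server certificate for the new machine with the SANs listed at provision.go:112 and copies it to /etc/docker. A minimal Go sketch of issuing such a certificate with crypto/x509 follows; it is illustrative only, assumes the CA key is an RSA PKCS#1 PEM, and reuses the IPs and hostnames from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short; real code would return errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Load the CA generated earlier in the provisioning flow.
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 key

	// Server key plus a template carrying the SANs from the log line above.
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-541086"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.47"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-541086"},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}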
	I0229 18:41:40.760727   37609 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:41:40.760927   37609 config.go:182] Loaded profile config "kubernetes-upgrade-541086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:41:40.761006   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:40.763775   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.764151   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:40.764172   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:40.764379   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:40.764569   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.764711   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:40.764899   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:40.765075   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:40.765246   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:40.765259   37609 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:41:41.056864   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:41:41.056892   37609 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:41:41.056900   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetURL
	I0229 18:41:41.058363   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | Using libvirt version 6000000
	I0229 18:41:41.060935   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.061340   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.061376   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.061523   37609 main.go:141] libmachine: Docker is up and running!
	I0229 18:41:41.061538   37609 main.go:141] libmachine: Reticulating splines...
	I0229 18:41:41.061544   37609 client.go:171] LocalClient.Create took 24.508700375s
	I0229 18:41:41.061565   37609 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-541086" took 24.508769614s
	I0229 18:41:41.061574   37609 start.go:300] post-start starting for "kubernetes-upgrade-541086" (driver="kvm2")
	I0229 18:41:41.061586   37609 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:41:41.061607   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:41.061841   37609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:41:41.061861   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:41.064376   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.064732   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.064762   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.064920   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:41.065107   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:41.065249   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:41.065384   37609 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:41:41.154288   37609 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:41:41.160861   37609 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:41:41.160885   37609 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:41:41.160959   37609 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:41:41.161046   37609 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:41:41.161159   37609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:41:41.172089   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:41:41.199479   37609 start.go:303] post-start completed in 137.891775ms
	I0229 18:41:41.199535   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetConfigRaw
	I0229 18:41:41.200267   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:41:41.203163   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.203593   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.203623   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.203914   37609 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/config.json ...
	I0229 18:41:41.204089   37609 start.go:128] duration metric: createHost completed in 24.671550765s
	I0229 18:41:41.204111   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:41.206455   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.206923   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.206952   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.207084   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:41.207281   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:41.207457   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:41.207655   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:41.207839   37609 main.go:141] libmachine: Using SSH client type: native
	I0229 18:41:41.208034   37609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:41:41.208046   37609 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:41:41.320437   37609 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232101.304902525
	
	I0229 18:41:41.320463   37609 fix.go:206] guest clock: 1709232101.304902525
	I0229 18:41:41.320470   37609 fix.go:219] Guest: 2024-02-29 18:41:41.304902525 +0000 UTC Remote: 2024-02-29 18:41:41.204099836 +0000 UTC m=+51.600488482 (delta=100.802689ms)
	I0229 18:41:41.320501   37609 fix.go:190] guest clock delta is within tolerance: 100.802689ms
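(annotation) The fix.go lines above compare the guest clock against the host and skip resyncing because the ~100ms delta is within tolerance. A small sketch of that comparison, using the timestamp and delta from the log (the 2s tolerance here is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest and host clocks differ by no more
// than the given tolerance, the check implied by fix.go:190 above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Guest clock 1709232101.304902525 and a ~100.8ms delta, both from the log.
	guest := time.Unix(1709232101, 304902525)
	host := guest.Add(-100802689 * time.Nanosecond)
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}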
	I0229 18:41:41.320508   37609 start.go:83] releasing machines lock for "kubernetes-upgrade-541086", held for 24.788158253s
	I0229 18:41:41.320542   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:41.320794   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:41:41.324950   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.324994   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.325013   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.324954   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:41.325679   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:41.325879   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:41:41.325985   37609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:41:41.326030   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:41.326126   37609 ssh_runner.go:195] Run: cat /version.json
	I0229 18:41:41.326153   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:41:41.328907   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.329189   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.329268   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.329292   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.329413   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:41.329578   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:41.329600   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:41.329604   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:41.329772   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:41.329777   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:41:41.329983   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:41:41.329990   37609 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:41:41.330122   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:41:41.330255   37609 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:41:41.441024   37609 ssh_runner.go:195] Run: systemctl --version
	I0229 18:41:41.448640   37609 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:41:41.629151   37609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:41:41.636146   37609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:41:41.636215   37609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:41:41.661832   37609 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:41:41.661865   37609 start.go:475] detecting cgroup driver to use...
	I0229 18:41:41.661938   37609 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:41:41.686011   37609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:41:41.701812   37609 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:41:41.701903   37609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:41:41.717627   37609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:41:41.733564   37609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:41:41.862787   37609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:41:42.015813   37609 docker.go:233] disabling docker service ...
	I0229 18:41:42.015873   37609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:41:42.031644   37609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:41:42.047314   37609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:41:42.208341   37609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:41:42.364474   37609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:41:42.386380   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:41:42.410123   37609 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:41:42.410184   37609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:41:42.422104   37609 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:41:42.422169   37609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:41:42.434301   37609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:41:42.446989   37609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:41:42.459183   37609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:41:42.472382   37609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:41:42.482879   37609 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:41:42.482932   37609 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:41:42.497474   37609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:41:42.512139   37609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:41:42.648495   37609 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:41:42.802000   37609 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:41:42.802062   37609 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
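(annotation) After restarting CRI-O, minikube waits up to 60s for the socket and then for crictl to answer. A simple polling loop of the kind implied by start.go:522, written as a sketch rather than the actual implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the deadline
// passes, in the spirit of "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}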
	I0229 18:41:42.807796   37609 start.go:543] Will wait 60s for crictl version
	I0229 18:41:42.807856   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:42.812368   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:41:42.857643   37609 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:41:42.857723   37609 ssh_runner.go:195] Run: crio --version
	I0229 18:41:42.890228   37609 ssh_runner.go:195] Run: crio --version
	I0229 18:41:42.923859   37609 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:41:42.925291   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:41:42.928746   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:42.929234   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:41:32 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:41:42.929276   37609 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:41:42.929508   37609 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:41:42.934676   37609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:41:42.950153   37609 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:41:42.950220   37609 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:41:42.991161   37609 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
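(annotation) The preload check above lists images via crictl and looks for the expected references (here registry.k8s.io/kube-apiserver:v1.16.0). A hypothetical version of that lookup, assuming the usual `crictl images --output json` shape with an images array carrying repoTags:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models only the fields this check needs (assumed JSON shape).
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs crictl and reports whether ref is present in the runtime,
// the kind of check behind "couldn't find preloaded image ... assuming images
// are not preloaded".
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == ref {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.16.0")
	fmt.Println(ok, err)
}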
	I0229 18:41:42.991219   37609 ssh_runner.go:195] Run: which lz4
	I0229 18:41:42.996064   37609 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:41:43.000932   37609 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:41:43.000984   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:41:44.913255   37609 crio.go:444] Took 1.917235 seconds to copy over tarball
	I0229 18:41:44.913332   37609 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:41:47.772290   37609 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858912175s)
	I0229 18:41:47.772319   37609 crio.go:451] Took 2.859029 seconds to extract the tarball
	I0229 18:41:47.772330   37609 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:41:47.824805   37609 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:41:47.886656   37609 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:41:47.886691   37609 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:41:47.886765   37609 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:41:47.886794   37609 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:41:47.886886   37609 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:41:47.887114   37609 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:41:47.887128   37609 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:41:47.887130   37609 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:41:47.887250   37609 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:41:47.887272   37609 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:41:47.888352   37609 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:41:47.888368   37609 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:41:47.888481   37609 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:41:47.888648   37609 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:41:47.888765   37609 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:41:47.888801   37609 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:41:47.888944   37609 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:41:47.888955   37609 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:41:48.118319   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:41:48.177935   37609 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:41:48.177989   37609 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:41:48.178036   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.180166   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:41:48.183584   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:41:48.186838   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:41:48.188733   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:41:48.190974   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:41:48.197278   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:41:48.209137   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:41:48.406610   37609 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:41:48.406657   37609 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:41:48.406705   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.406832   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:41:48.406865   37609 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:41:48.406921   37609 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:41:48.406946   37609 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:41:48.406966   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.406975   37609 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:41:48.407006   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.435135   37609 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:41:48.435245   37609 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:41:48.435331   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.438987   37609 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:41:48.439038   37609 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:41:48.439048   37609 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:41:48.439076   37609 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:41:48.439087   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.439116   37609 ssh_runner.go:195] Run: which crictl
	I0229 18:41:48.439211   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:41:48.439176   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:41:48.439270   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:41:48.445906   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:41:48.446035   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:41:48.463628   37609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:41:48.639036   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:41:48.639082   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:41:48.639181   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:41:48.645735   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:41:48.645789   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:41:48.645839   37609 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:41:48.745156   37609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:41:48.896303   37609 cache_images.go:92] LoadImages completed in 1.009591426s
	W0229 18:41:48.896392   37609 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0229 18:41:48.896505   37609 ssh_runner.go:195] Run: crio config
	I0229 18:41:48.971711   37609 cni.go:84] Creating CNI manager for ""
	I0229 18:41:48.971740   37609 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:41:48.971790   37609 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:41:48.971814   37609 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-541086 NodeName:kubernetes-upgrade-541086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:41:48.971971   37609 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-541086"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-541086
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.47:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:41:48.972060   37609 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-541086 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
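(annotation) The kubeadm.go lines above show the options struct and the YAML manifest generated from it. As a sketch of that parameters-to-manifest step (not minikube's real templates, which are far larger), here is a text/template rendering of a ClusterConfiguration fragment using the values from the log:

package main

import (
	"os"
	"text/template"
)

// A fragment of the ClusterConfiguration rendered from the parameters shown
// in the log. Illustrative only.
const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	ClusterName       string
	APIServerPort     int
	KubernetesVersion string
	DNSDomain         string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	// Values copied from the kubeadm.go:176 options line above.
	p := params{
		ClusterName:       "kubernetes-upgrade-541086",
		APIServerPort:     8443,
		KubernetesVersion: "v1.16.0",
		DNSDomain:         "cluster.local",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}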
	I0229 18:41:48.972123   37609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:41:48.986977   37609 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:41:48.987092   37609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:41:48.999084   37609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0229 18:41:49.018937   37609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:41:49.038975   37609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2183 bytes)
	I0229 18:41:49.062072   37609 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0229 18:41:49.067698   37609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:41:49.085105   37609 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086 for IP: 192.168.50.47
	I0229 18:41:49.085148   37609 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.085318   37609 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:41:49.085360   37609 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:41:49.085399   37609 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.key
	I0229 18:41:49.085412   37609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.crt with IP's: []
	I0229 18:41:49.482493   37609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.crt ...
	I0229 18:41:49.482523   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.crt: {Name:mk89176d277f4c45e52bc10a858b53c9a4984838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.482700   37609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.key ...
	I0229 18:41:49.482716   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.key: {Name:mkcf2cb0a1dc870c454ed95d02f81e6d513eb545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.482816   37609 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key.6a3aec60
	I0229 18:41:49.482837   37609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt.6a3aec60 with IP's: [192.168.50.47 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:41:49.540060   37609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt.6a3aec60 ...
	I0229 18:41:49.540090   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt.6a3aec60: {Name:mk352e2fb2107f1d84e486a7b12cf6754e00d5c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.573632   37609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key.6a3aec60 ...
	I0229 18:41:49.573671   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key.6a3aec60: {Name:mkdb667d9f8c34a5ac79e0b6c2bbcfe824b9dab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.573807   37609 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt.6a3aec60 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt
	I0229 18:41:49.573914   37609 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key.6a3aec60 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key
	I0229 18:41:49.573993   37609 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key
	I0229 18:41:49.574013   37609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.crt with IP's: []
	I0229 18:41:49.734169   37609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.crt ...
	I0229 18:41:49.821315   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.crt: {Name:mk84af6c09955c9187d556ff55968b4366f00087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.821501   37609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key ...
	I0229 18:41:49.821519   37609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key: {Name:mkf6f2892dfded0e8d3eb40900bb0df7a7bbe90e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:41:49.821714   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:41:49.821765   37609 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:41:49.821781   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:41:49.821817   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:41:49.821879   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:41:49.821917   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:41:49.821981   37609 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:41:49.822577   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:41:49.853685   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:41:49.884705   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:41:49.913020   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:41:49.943226   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:41:49.974833   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:41:50.007842   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:41:50.037200   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:41:50.066628   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:41:50.095671   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:41:50.131651   37609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:41:50.168688   37609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:41:50.196329   37609 ssh_runner.go:195] Run: openssl version
	I0229 18:41:50.205350   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:41:50.220847   37609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:41:50.226581   37609 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:41:50.226642   37609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:41:50.233412   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:41:50.246471   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:41:50.259791   37609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:41:50.265549   37609 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:41:50.265624   37609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:41:50.274516   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:41:50.292121   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:41:50.308118   37609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:41:50.313521   37609 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:41:50.313597   37609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:41:50.320212   37609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
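(Note: the three sequences above install each CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs using OpenSSL's subject-hash naming, via the same openssl and ln calls shown in the log. A minimal sketch of that convention, assuming shell access to the node and a hypothetical certificate file ca.pem:

    # compute the OpenSSL subject hash for the certificate (same openssl call as above)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem)
    # link the certificate under /etc/ssl/certs/<hash>.0 so OpenSSL-aware clients can resolve it
    sudo ln -fs /usr/share/ca-certificates/ca.pem "/etc/ssl/certs/${HASH}.0"
)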
	I0229 18:41:50.333448   37609 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:41:50.340187   37609 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:41:50.340253   37609 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:41:50.340345   37609 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:41:50.340408   37609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:41:50.388871   37609 cri.go:89] found id: ""
	I0229 18:41:50.388948   37609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:41:50.400506   37609 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:41:50.412462   37609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:41:50.427351   37609 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:41:50.427398   37609 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:41:50.810255   37609 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:43:48.505726   37609 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:43:48.505926   37609 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:43:48.507164   37609 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:43:48.507266   37609 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:43:48.507390   37609 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:43:48.507581   37609 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:43:48.507783   37609 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:43:48.508008   37609 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:43:48.508284   37609 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:43:48.508569   37609 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:43:48.508710   37609 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:43:48.510345   37609 out.go:204]   - Generating certificates and keys ...
	I0229 18:43:48.510411   37609 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:43:48.510467   37609 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:43:48.510540   37609 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:43:48.510598   37609 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:43:48.510654   37609 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:43:48.510710   37609 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:43:48.510766   37609 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:43:48.510887   37609 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-541086 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I0229 18:43:48.510934   37609 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:43:48.511081   37609 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-541086 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I0229 18:43:48.511196   37609 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:43:48.511312   37609 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:43:48.511359   37609 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:43:48.511410   37609 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:43:48.511456   37609 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:43:48.511505   37609 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:43:48.511558   37609 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:43:48.511603   37609 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:43:48.511658   37609 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:43:48.512924   37609 out.go:204]   - Booting up control plane ...
	I0229 18:43:48.512998   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:43:48.513092   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:43:48.513176   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:43:48.513259   37609 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:43:48.513394   37609 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:43:48.513445   37609 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:43:48.513502   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:48.513733   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:48.513803   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:48.513964   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:48.514024   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:48.514201   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:48.514284   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:48.514449   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:48.514508   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:48.514660   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:48.514667   37609 kubeadm.go:322] 
	I0229 18:43:48.514699   37609 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:43:48.514733   37609 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:43:48.514739   37609 kubeadm.go:322] 
	I0229 18:43:48.514767   37609 kubeadm.go:322] This error is likely caused by:
	I0229 18:43:48.514798   37609 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:43:48.514918   37609 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:43:48.514928   37609 kubeadm.go:322] 
	I0229 18:43:48.515036   37609 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:43:48.515083   37609 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:43:48.515123   37609 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:43:48.515130   37609 kubeadm.go:322] 
	I0229 18:43:48.515223   37609 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:43:48.515353   37609 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:43:48.515426   37609 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:43:48.515472   37609 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:43:48.515538   37609 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:43:48.515622   37609 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 18:43:48.515665   37609 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-541086 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-541086 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
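(Note: the kubeadm hint above is phrased in terms of docker, but this profile runs the crio container runtime. An equivalent set of checks with crictl, which the test itself uses elsewhere in this log, might look like the following sketch, run on the node, e.g. via minikube ssh -p kubernetes-upgrade-541086; CONTAINERID is a placeholder:

    # confirm whether the kubelet is running and inspect its recent output
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list all Kubernetes containers known to CRI-O, including exited ones
    sudo crictl ps -a
    # inspect the logs of a failing container by its ID
    sudo crictl logs CONTAINERID
)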
	
	I0229 18:43:48.515708   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 18:43:48.989988   37609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:43:49.005475   37609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:43:49.016674   37609 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:43:49.016718   37609 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:43:49.206865   37609 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:45:45.571311   37609 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:45:45.571445   37609 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:45:45.573109   37609 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:45:45.573168   37609 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:45:45.573272   37609 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:45:45.573427   37609 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:45:45.573564   37609 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:45:45.573698   37609 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:45:45.573816   37609 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:45:45.573886   37609 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:45:45.573974   37609 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:45:45.575853   37609 out.go:204]   - Generating certificates and keys ...
	I0229 18:45:45.575952   37609 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:45:45.576016   37609 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:45:45.576118   37609 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:45:45.576206   37609 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:45:45.576296   37609 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:45:45.576366   37609 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:45:45.576459   37609 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:45:45.576543   37609 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:45:45.576616   37609 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:45:45.576707   37609 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:45:45.576759   37609 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:45:45.576829   37609 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:45:45.576900   37609 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:45:45.576987   37609 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:45:45.577064   37609 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:45:45.577135   37609 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:45:45.577228   37609 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:45:45.579039   37609 out.go:204]   - Booting up control plane ...
	I0229 18:45:45.579153   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:45:45.579260   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:45:45.579350   37609 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:45:45.579492   37609 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:45:45.579719   37609 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:45:45.579799   37609 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:45:45.579892   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:45.580137   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:45.580229   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:45.580473   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:45.580579   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:45.580844   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:45.580955   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:45.581214   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:45.581317   37609 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:45.581575   37609 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:45.581583   37609 kubeadm.go:322] 
	I0229 18:45:45.581632   37609 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:45:45.581688   37609 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:45:45.581697   37609 kubeadm.go:322] 
	I0229 18:45:45.581743   37609 kubeadm.go:322] This error is likely caused by:
	I0229 18:45:45.581789   37609 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:45:45.581925   37609 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:45:45.581935   37609 kubeadm.go:322] 
	I0229 18:45:45.582067   37609 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:45:45.582113   37609 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:45:45.582156   37609 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:45:45.582165   37609 kubeadm.go:322] 
	I0229 18:45:45.582296   37609 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:45:45.582422   37609 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:45:45.582541   37609 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:45:45.582600   37609 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:45:45.582695   37609 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:45:45.582811   37609 kubeadm.go:406] StartCluster complete in 3m55.242564685s
	I0229 18:45:45.582850   37609 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:45:45.582917   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:45:45.582994   37609 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:45:45.642084   37609 cri.go:89] found id: ""
	I0229 18:45:45.642117   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.642129   37609 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:45:45.642137   37609 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:45:45.642207   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:45:45.685245   37609 cri.go:89] found id: ""
	I0229 18:45:45.685282   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.685295   37609 logs.go:278] No container was found matching "etcd"
	I0229 18:45:45.685302   37609 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:45:45.685364   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:45:45.735525   37609 cri.go:89] found id: ""
	I0229 18:45:45.735552   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.735559   37609 logs.go:278] No container was found matching "coredns"
	I0229 18:45:45.735565   37609 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:45:45.735619   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:45:45.786485   37609 cri.go:89] found id: ""
	I0229 18:45:45.786518   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.786531   37609 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:45:45.786539   37609 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:45:45.786617   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:45:45.837950   37609 cri.go:89] found id: ""
	I0229 18:45:45.837987   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.838000   37609 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:45:45.838008   37609 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:45:45.838081   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:45:45.880446   37609 cri.go:89] found id: ""
	I0229 18:45:45.880475   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.880486   37609 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:45:45.880493   37609 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:45:45.880547   37609 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:45:45.926518   37609 cri.go:89] found id: ""
	I0229 18:45:45.926546   37609 logs.go:276] 0 containers: []
	W0229 18:45:45.926556   37609 logs.go:278] No container was found matching "kindnet"
	I0229 18:45:45.926567   37609 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:45:45.926585   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:45:46.078463   37609 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:45:46.078490   37609 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:45:46.078505   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:45:46.229242   37609 logs.go:123] Gathering logs for container status ...
	I0229 18:45:46.229290   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:45:46.279575   37609 logs.go:123] Gathering logs for kubelet ...
	I0229 18:45:46.279605   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:45:46.339581   37609 logs.go:123] Gathering logs for dmesg ...
	I0229 18:45:46.339613   37609 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
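(Note: the "Gathering logs" steps above can be reproduced by hand against the same node; a sketch assuming the profile from this run and the minikube binary the test uses:

    # open a shell on the kubernetes-upgrade-541086 node
    out/minikube-linux-amd64 -p kubernetes-upgrade-541086 ssh
    # on the node, collect the same diagnostics the test gathers above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
)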
	W0229 18:45:46.360845   37609 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
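(Note: as the message suggests, the failing init can be re-run with higher verbosity to capture the stack trace; a sketch of the same command the test issues, with --v=5 appended, run on the node:

    # re-run kubeadm init with verbose tracing, using the same config and preflight-ignore list as above
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU \
      --v=5
)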
	W0229 18:45:46.360887   37609 out.go:239] * 
	W0229 18:45:46.360942   37609 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:45:46.360975   37609 out.go:239] * 
	* 
	W0229 18:45:46.361881   37609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:45:46.365742   37609 out.go:177] 
	W0229 18:45:46.366978   37609 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:45:46.367039   37609 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:45:46.367063   37609 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:45:46.368816   37609 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
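The suggestion printed above points at a kubelet cgroup-driver mismatch on the v1.16.0 start. A minimal sketch of a manual re-run with that workaround, reusing the same profile and flags as this test; the --extra-config value is the one named in minikube's own suggestion, and whether it actually clears this failure is not verified here:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the control plane still times out, inspect the kubelet inside the VM as the log advises
	out/minikube-linux-amd64 -p kubernetes-upgrade-541086 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-541086 ssh "sudo journalctl -xeu kubelet"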
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-541086
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-541086: (2.404738633s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-541086 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-541086 status --format={{.Host}}: exit status 7 (86.514729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.666888977s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-541086 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (107.794989ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-541086] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-541086
	    minikube start -p kubernetes-upgrade-541086 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5410862 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-541086 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0229 18:47:29.716671   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:47:43.785600   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-541086 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.740104828s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 18:47:44.497455 +0000 UTC m=+4221.992438774
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-541086 -n kubernetes-upgrade-541086
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-541086 logs -n 25
E0229 18:47:46.663488   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-541086 logs -n 25: (1.971343911s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo find             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo crio             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-587185                       | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| start   | -p pause-848791 --memory=2048          | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:46 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-588905            | force-systemd-env-588905  | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-expiration-393248              | cert-expiration-393248    | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-297898 ssh cat      | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-297898           | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-009676 ssh                | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-009676 -- sudo         | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	| start   | -p old-k8s-version-631080              | old-k8s-version-631080    | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	| start   | -p pause-848791                        | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:47:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:47:06.812195   44658 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:47:06.812322   44658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:06.812330   44658 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:06.812334   44658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:06.813109   44658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:47:06.813953   44658 out.go:298] Setting JSON to false
	I0229 18:47:06.815515   44658 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5371,"bootTime":1709227056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:47:06.815583   44658 start.go:139] virtualization: kvm guest
	I0229 18:47:06.817215   44658 out.go:177] * [kubernetes-upgrade-541086] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:47:06.818916   44658 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:47:06.820207   44658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:47:06.818938   44658 notify.go:220] Checking for updates...
	I0229 18:47:06.821646   44658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:47:06.823287   44658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:06.824602   44658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:47:06.825693   44658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:47:06.827318   44658 config.go:182] Loaded profile config "kubernetes-upgrade-541086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:47:06.827938   44658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:06.827988   44658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:06.843384   44658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I0229 18:47:06.843778   44658 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:06.844332   44658 main.go:141] libmachine: Using API Version  1
	I0229 18:47:06.844372   44658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:06.844666   44658 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:06.844859   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:06.845078   44658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:47:06.845354   44658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:06.845388   44658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:06.861158   44658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0229 18:47:06.861672   44658 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:06.862142   44658 main.go:141] libmachine: Using API Version  1
	I0229 18:47:06.862167   44658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:06.862612   44658 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:06.862821   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:06.896790   44658 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:47:06.898037   44658 start.go:299] selected driver: kvm2
	I0229 18:47:06.898055   44658 start.go:903] validating driver "kvm2" against &{Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:06.898149   44658 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:47:06.898855   44658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:06.898931   44658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:47:06.915787   44658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:47:06.916334   44658 cni.go:84] Creating CNI manager for ""
	I0229 18:47:06.916358   44658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:06.916371   44658 start_flags.go:323] config:
	{Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:06.916585   44658 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:06.918370   44658 out.go:177] * Starting control plane node kubernetes-upgrade-541086 in cluster kubernetes-upgrade-541086
	I0229 18:47:05.068142   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:05.068756   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:05.068790   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:05.068705   44421 retry.go:31] will retry after 2.541472609s: waiting for machine to come up
	I0229 18:47:07.612579   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:07.613127   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:07.613150   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:07.613071   44421 retry.go:31] will retry after 2.349373813s: waiting for machine to come up
	I0229 18:47:06.919505   44658 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:47:06.919568   44658 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 18:47:06.919582   44658 cache.go:56] Caching tarball of preloaded images
	I0229 18:47:06.919676   44658 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:47:06.919688   44658 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 18:47:06.919794   44658 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/config.json ...
	I0229 18:47:06.920021   44658 start.go:365] acquiring machines lock for kubernetes-upgrade-541086: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:47:09.963760   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:09.964188   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:09.964220   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:09.964150   44421 retry.go:31] will retry after 3.751562898s: waiting for machine to come up
	I0229 18:47:13.716793   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:13.717271   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:13.717296   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:13.717225   44421 retry.go:31] will retry after 4.503795972s: waiting for machine to come up
	I0229 18:47:18.224043   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.224566   44399 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:47:18.224591   44399 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:47:18.224626   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.224908   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080
	I0229 18:47:18.299959   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:47:18.299985   44399 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:47:18.299997   44399 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:47:18.302466   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.302909   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.302938   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.303174   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:47:18.303195   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:47:18.303223   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:47:18.303239   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:47:18.303252   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:47:18.435600   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:47:18.435889   44399 main.go:141] libmachine: (old-k8s-version-631080) KVM machine creation complete!
	I0229 18:47:18.436301   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:47:18.436823   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:18.437032   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:18.437207   44399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:47:18.437223   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:47:18.438569   44399 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:47:18.438585   44399 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:47:18.438592   44399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:47:18.438600   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.441240   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.441690   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.441713   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.441907   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.442110   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.442258   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.442413   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.442570   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.442821   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.442838   44399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:47:18.559059   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:18.559084   44399 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:47:18.559092   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.562247   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.562660   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.562681   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.562845   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.563079   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.563293   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.563479   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.563662   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.563827   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.563839   44399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:47:18.684893   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:47:18.685033   44399 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:47:18.685088   44399 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:47:18.685106   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.685329   44399 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:47:18.685364   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.685505   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.688396   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.688727   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.688757   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.688831   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.689033   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.689194   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.689320   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.689520   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.689679   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.689690   44399 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:47:18.826596   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:47:18.826630   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.829407   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.829838   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.829880   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.830077   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.830293   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.830513   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.830680   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.830942   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.831174   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.831198   44399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:47:18.953906   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:18.953936   44399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:47:18.953994   44399 buildroot.go:174] setting up certificates
	I0229 18:47:18.954012   44399 provision.go:83] configureAuth start
	I0229 18:47:18.954031   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.954333   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:18.957242   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.957560   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.957588   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.957716   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.960304   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.960631   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.960659   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.960797   44399 provision.go:138] copyHostCerts
	I0229 18:47:18.960845   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:47:18.960861   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:47:18.960903   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:47:18.960994   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:47:18.961001   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:47:18.961021   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:47:18.961081   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:47:18.961088   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:47:18.961112   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:47:18.961194   44399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:47:19.135550   44399 provision.go:172] copyRemoteCerts
	I0229 18:47:19.135634   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:47:19.135662   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.138560   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.138941   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.138972   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.139225   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.139414   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.139616   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.139792   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.231908   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:47:19.264773   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:47:19.298215   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:47:19.331349   44399 provision.go:86] duration metric: configureAuth took 377.321285ms
	I0229 18:47:19.331380   44399 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:47:19.331565   44399 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:47:19.331649   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.334726   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.335015   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.335082   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.335285   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.335487   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.335675   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.335852   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.336026   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.336181   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:19.336198   44399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:47:19.960432   44536 start.go:369] acquired machines lock for "pause-848791" in 20.866650965s
	I0229 18:47:19.960492   44536 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:47:19.960498   44536 fix.go:54] fixHost starting: 
	I0229 18:47:19.960919   44536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:19.960967   44536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:19.980199   44536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0229 18:47:19.980728   44536 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:19.981245   44536 main.go:141] libmachine: Using API Version  1
	I0229 18:47:19.981269   44536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:19.981688   44536 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:19.981876   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:19.982020   44536 main.go:141] libmachine: (pause-848791) Calling .GetState
	I0229 18:47:19.983642   44536 fix.go:102] recreateIfNeeded on pause-848791: state=Running err=<nil>
	W0229 18:47:19.983664   44536 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:47:19.985537   44536 out.go:177] * Updating the running kvm2 "pause-848791" VM ...
	I0229 18:47:19.672748   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:47:19.672796   44399 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:47:19.672807   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetURL
	I0229 18:47:19.674145   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using libvirt version 6000000
	I0229 18:47:19.676856   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.677213   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.677245   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.677465   44399 main.go:141] libmachine: Docker is up and running!
	I0229 18:47:19.677482   44399 main.go:141] libmachine: Reticulating splines...
	I0229 18:47:19.677490   44399 client.go:171] LocalClient.Create took 25.205493908s
	I0229 18:47:19.677520   44399 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-631080" took 25.205561905s
	I0229 18:47:19.677553   44399 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:47:19.677571   44399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:47:19.677606   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.677840   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:47:19.677880   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.680494   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.680953   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.680982   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.681169   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.681386   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.681577   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.681774   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.774677   44399 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:47:19.780268   44399 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:47:19.780305   44399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:47:19.780372   44399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:47:19.780464   44399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:47:19.780560   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:47:19.793462   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:19.827834   44399 start.go:303] post-start completed in 150.26432ms
	I0229 18:47:19.827888   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:47:19.828617   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:19.831703   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.832101   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.832156   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.832447   44399 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:47:19.832629   44399 start.go:128] duration metric: createHost completed in 25.379025184s
	I0229 18:47:19.832655   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.835297   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.835694   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.835724   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.835936   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.836166   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.836336   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.836502   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.836727   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.836929   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:19.836950   44399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:47:19.960285   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232439.945419352
	
	I0229 18:47:19.960317   44399 fix.go:206] guest clock: 1709232439.945419352
	I0229 18:47:19.960326   44399 fix.go:219] Guest: 2024-02-29 18:47:19.945419352 +0000 UTC Remote: 2024-02-29 18:47:19.832640557 +0000 UTC m=+25.510768927 (delta=112.778795ms)
	I0229 18:47:19.960359   44399 fix.go:190] guest clock delta is within tolerance: 112.778795ms
	I0229 18:47:19.960373   44399 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 25.506866987s
	I0229 18:47:19.960402   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.960711   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:19.963691   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.964054   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.964091   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.964269   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.964882   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.965100   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.965195   44399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:47:19.965240   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.965332   44399 ssh_runner.go:195] Run: cat /version.json
	I0229 18:47:19.965358   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.967874   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968255   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968285   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.968305   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968559   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.968602   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.968668   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968768   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.968968   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.968970   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.969139   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.969180   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.969318   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.969424   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:20.053589   44399 ssh_runner.go:195] Run: systemctl --version
	I0229 18:47:20.077103   44399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:47:20.252649   44399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:47:20.261187   44399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:47:20.261247   44399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:47:20.283948   44399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:47:20.283971   44399 start.go:475] detecting cgroup driver to use...
	I0229 18:47:20.284054   44399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:47:20.307439   44399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:47:20.323544   44399 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:47:20.323623   44399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:47:20.340633   44399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:47:20.357816   44399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:47:20.500836   44399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:47:20.670745   44399 docker.go:233] disabling docker service ...
	I0229 18:47:20.670818   44399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:47:20.693963   44399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:47:20.709322   44399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:47:20.857494   44399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:47:20.974334   44399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:47:20.989627   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:47:21.011314   44399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:47:21.011383   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.023324   44399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:47:21.023376   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.034944   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.047132   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.058481   44399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:47:21.069981   44399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:47:21.080871   44399 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:47:21.080937   44399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:47:21.094616   44399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:47:21.104898   44399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:47:21.218429   44399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:47:21.365983   44399 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:47:21.366055   44399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:47:21.371683   44399 start.go:543] Will wait 60s for crictl version
	I0229 18:47:21.371735   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:21.376150   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:47:21.411742   44399 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:47:21.411815   44399 ssh_runner.go:195] Run: crio --version
	I0229 18:47:21.445410   44399 ssh_runner.go:195] Run: crio --version
	I0229 18:47:21.479306   44399 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:47:19.986847   44536 machine.go:88] provisioning docker machine ...
	I0229 18:47:19.986875   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:19.987076   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:19.987230   44536 buildroot.go:166] provisioning hostname "pause-848791"
	I0229 18:47:19.987244   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:19.987387   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:19.989955   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:19.990378   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:19.990414   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:19.990614   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:19.990793   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:19.990991   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:19.991136   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:19.991321   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.991590   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:19.991611   44536 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-848791 && echo "pause-848791" | sudo tee /etc/hostname
	I0229 18:47:20.114872   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-848791
	
	I0229 18:47:20.114908   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.118232   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.118523   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.118551   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.118830   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.119167   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.119319   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.119489   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.119700   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:20.119949   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:20.119975   44536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-848791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-848791/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-848791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:47:20.244822   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:20.244854   44536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:47:20.244881   44536 buildroot.go:174] setting up certificates
	I0229 18:47:20.244891   44536 provision.go:83] configureAuth start
	I0229 18:47:20.244901   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:20.245215   44536 main.go:141] libmachine: (pause-848791) Calling .GetIP
	I0229 18:47:20.248167   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.248531   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.248567   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.248729   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.251491   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.251852   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.251895   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.252041   44536 provision.go:138] copyHostCerts
	I0229 18:47:20.252114   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:47:20.252134   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:47:20.252209   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:47:20.252346   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:47:20.252358   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:47:20.252391   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:47:20.252487   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:47:20.252498   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:47:20.252526   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:47:20.252631   44536 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.pause-848791 san=[192.168.72.95 192.168.72.95 localhost 127.0.0.1 minikube pause-848791]
	I0229 18:47:20.337511   44536 provision.go:172] copyRemoteCerts
	I0229 18:47:20.337563   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:47:20.337590   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.340832   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.341164   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.341201   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.341439   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.341666   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.341972   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.342139   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:20.433713   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:47:20.464741   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:47:20.494915   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 18:47:20.530916   44536 provision.go:86] duration metric: configureAuth took 286.011618ms
	I0229 18:47:20.530949   44536 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:47:20.531280   44536 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:20.531376   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.534690   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.535196   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.535237   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.535426   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.535693   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.535881   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.536072   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.536302   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:20.536526   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:20.536549   44536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:47:21.480717   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:21.483538   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:21.484275   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:21.484313   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:21.484389   44399 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:47:21.489479   44399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:47:21.504264   44399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:47:21.504323   44399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:21.544230   44399 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:47:21.544292   44399 ssh_runner.go:195] Run: which lz4
	I0229 18:47:21.549054   44399 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:47:21.553913   44399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:47:21.553942   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:47:23.377275   44399 crio.go:444] Took 1.828245 seconds to copy over tarball
	I0229 18:47:23.377346   44399 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:47:28.864966   44658 start.go:369] acquired machines lock for "kubernetes-upgrade-541086" in 21.94490984s
	I0229 18:47:28.865043   44658 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:47:28.865054   44658 fix.go:54] fixHost starting: 
	I0229 18:47:28.865450   44658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:28.865495   44658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:28.883432   44658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0229 18:47:28.883793   44658 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:28.884438   44658 main.go:141] libmachine: Using API Version  1
	I0229 18:47:28.884460   44658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:28.884904   44658 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:28.885106   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:28.885297   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetState
	I0229 18:47:28.887067   44658 fix.go:102] recreateIfNeeded on kubernetes-upgrade-541086: state=Running err=<nil>
	W0229 18:47:28.887089   44658 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:47:28.888815   44658 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-541086" VM ...
	I0229 18:47:26.113551   44399 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.736173489s)
	I0229 18:47:26.113586   44399 crio.go:451] Took 2.736285 seconds to extract the tarball
	I0229 18:47:26.113598   44399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:47:26.159257   44399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:26.230903   44399 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:47:26.230931   44399 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:47:26.231011   44399 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:26.231322   44399 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.231335   44399 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:47:26.231463   44399 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.231522   44399 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.231651   44399 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.231721   44399 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.231836   44399 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.233270   44399 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.233322   44399 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.233338   44399 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.233270   44399 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.233269   44399 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.233537   44399 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:47:26.233581   44399 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:26.233604   44399 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.425313   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.472944   44399 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:47:26.472976   44399 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.473012   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.478466   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.510985   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.512523   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.512615   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.514955   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.517056   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.518110   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:47:26.521570   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:47:26.676126   44399 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:47:26.676160   44399 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.676202   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.700309   44399 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:47:26.700353   44399 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.700407   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.711695   44399 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:47:26.711734   44399 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.711768   44399 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:47:26.711779   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.711810   44399 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.711862   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.712901   44399 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:47:26.712937   44399 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.712970   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.719226   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.719242   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.719576   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.720853   44399 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:47:26.720876   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.720880   44399 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:47:26.720910   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.726539   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.861202   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:47:26.861239   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:47:26.861320   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:47:26.861384   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:47:26.861423   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:47:26.861442   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:47:26.899807   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:47:27.192295   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:27.346152   44399 cache_images.go:92] LoadImages completed in 1.115199635s
	W0229 18:47:27.346250   44399 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0229 18:47:27.346330   44399 ssh_runner.go:195] Run: crio config
	I0229 18:47:27.415867   44399 cni.go:84] Creating CNI manager for ""
	I0229 18:47:27.415890   44399 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:27.415909   44399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:47:27.415932   44399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:47:27.416083   44399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:47:27.416173   44399 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:47:27.416233   44399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:47:27.428417   44399 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:47:27.428489   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:47:27.440371   44399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:47:27.461663   44399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:47:27.481045   44399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:47:27.500501   44399 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:47:27.505078   44399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
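	(Editor's note, not part of the log: the /etc/hosts one-liner above, rewritten for readability as an illustrative shell sketch. It removes any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back over /etc/hosts.)
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;   # drop the old entry, keep everything else
	  echo "192.168.83.214	control-plane.minikube.internal";     # append the current mapping
	} > /tmp/h.$$                                                 # $$ = shell PID, used as a unique temp file
	sudo cp /tmp/h.$$ /etc/hosts                                  # overwrite /etc/hosts with the rebuilt file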
	I0229 18:47:27.520460   44399 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:47:27.520497   44399 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.520650   44399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:47:27.520707   44399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:47:27.520766   44399 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:47:27.520784   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt with IP's: []
	I0229 18:47:27.864189   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt ...
	I0229 18:47:27.864218   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: {Name:mk8fd53eb0b8d5b17fbea8f891f6884eeff3e169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.864375   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key ...
	I0229 18:47:27.864388   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key: {Name:mkae5ee58641b4deefdd16ee54eec9cef558c1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.864459   44399 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:47:27.864474   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 with IP's: [192.168.83.214 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:47:27.938293   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 ...
	I0229 18:47:27.938324   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109: {Name:mk03c2e3b7b0f5688b90b82a5d3b6a3e198d646f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.938503   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109 ...
	I0229 18:47:27.938539   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109: {Name:mkf8eea34d8de97d5d0f70aeb5b2b830c1240c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.938662   44399 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt
	I0229 18:47:27.938755   44399 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key
	I0229 18:47:27.938834   44399 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:47:27.938856   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt with IP's: []
	I0229 18:47:28.153554   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt ...
	I0229 18:47:28.153585   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt: {Name:mkef1cb59c0851a60f74685c02d6c4b49a29cffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:28.153758   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key ...
	I0229 18:47:28.153775   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key: {Name:mk4c5236bac24ddb7c6a48fbc9c96d9664cc4ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:28.153996   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:47:28.154048   44399 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:47:28.154064   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:47:28.154094   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:47:28.154134   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:47:28.154168   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:47:28.154226   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:28.154826   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:47:28.186389   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:47:28.215574   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:47:28.245800   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:47:28.303841   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:47:28.335355   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:47:28.363388   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:47:28.393630   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:47:28.423875   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:47:28.454377   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:47:28.483445   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:47:28.511360   44399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:47:28.530599   44399 ssh_runner.go:195] Run: openssl version
	I0229 18:47:28.537246   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:47:28.549806   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.555377   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.555437   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.563093   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:47:28.578569   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:47:28.593874   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.600519   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.600560   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.607700   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:47:28.623722   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:47:28.636281   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.641592   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.641642   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.649008   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
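	(Editor's note, not part of the log: an illustrative sketch of the mechanism behind the openssl/ln pairs above. "openssl x509 -hash" prints the certificate's subject-name hash, and OpenSSL resolves trusted CAs through /etc/ssl/certs/<hash>.0 symlinks, which is why minikubeCA.pem ends up linked as b5213941.0.)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # <hash>.0 is the name OpenSSL looks up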
	I0229 18:47:28.661558   44399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:47:28.666198   44399 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:47:28.666249   44399 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:28.666317   44399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:47:28.666358   44399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:47:28.716681   44399 cri.go:89] found id: ""
	I0229 18:47:28.716742   44399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:47:28.728300   44399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:47:28.739080   44399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:47:28.749442   44399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:47:28.749488   44399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:47:28.881883   44399 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:47:28.881987   44399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:47:29.160390   44399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:47:29.160580   44399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:47:29.160724   44399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:47:29.448897   44399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:47:29.451654   44399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:47:29.462589   44399 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:47:29.601848   44399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:47:28.889952   44658 machine.go:88] provisioning docker machine ...
	I0229 18:47:28.889975   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:28.890143   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:47:28.890280   44658 buildroot.go:166] provisioning hostname "kubernetes-upgrade-541086"
	I0229 18:47:28.890302   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:47:28.890432   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:28.893192   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:28.893695   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:28.893722   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:28.893854   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:28.893976   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:28.894112   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:28.894255   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:28.894409   44658 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:28.894625   44658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:47:28.894638   44658 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-541086 && echo "kubernetes-upgrade-541086" | sudo tee /etc/hostname
	I0229 18:47:29.048985   44658 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-541086
	
	I0229 18:47:29.049020   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:29.052163   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.052576   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:29.052652   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.052734   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:29.052934   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:29.053105   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:29.053239   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:29.053394   44658 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:29.053609   44658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:47:29.053635   44658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-541086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-541086/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-541086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:47:29.177946   44658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:29.177972   44658 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:47:29.177993   44658 buildroot.go:174] setting up certificates
	I0229 18:47:29.178003   44658 provision.go:83] configureAuth start
	I0229 18:47:29.178020   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetMachineName
	I0229 18:47:29.178284   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:47:29.181167   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.181552   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:29.181580   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.181740   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:29.184471   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.184855   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:29.184884   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.185050   44658 provision.go:138] copyHostCerts
	I0229 18:47:29.185107   44658 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:47:29.185130   44658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:47:29.185197   44658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:47:29.185317   44658 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:47:29.185330   44658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:47:29.185363   44658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:47:29.185464   44658 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:47:29.185475   44658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:47:29.185505   44658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:47:29.185572   44658 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-541086 san=[192.168.50.47 192.168.50.47 localhost 127.0.0.1 minikube kubernetes-upgrade-541086]
	I0229 18:47:29.466955   44658 provision.go:172] copyRemoteCerts
	I0229 18:47:29.467043   44658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:47:29.467073   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:29.470703   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.471231   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:29.471275   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.471484   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:29.471701   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:29.471869   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:29.472007   44658 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:47:29.575443   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:47:29.615607   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:47:29.669271   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:47:29.718171   44658 provision.go:86] duration metric: configureAuth took 540.154581ms
	I0229 18:47:29.718198   44658 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:47:29.718449   44658 config.go:182] Loaded profile config "kubernetes-upgrade-541086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:47:29.718542   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:29.726541   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.727112   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:29.727152   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:29.727577   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:29.727808   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:29.727999   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:29.728214   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:29.728448   44658 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:29.728654   44658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:47:29.728672   44658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:47:29.603554   44399 out.go:204]   - Generating certificates and keys ...
	I0229 18:47:29.603696   44399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:47:29.603821   44399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:47:29.908358   44399 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:47:30.012736   44399 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:47:30.271926   44399 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:47:30.387535   44399 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:47:30.445345   44399 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:47:30.445529   44399 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	I0229 18:47:30.689130   44399 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:47:30.689407   44399 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	I0229 18:47:30.818464   44399 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:47:31.124163   44399 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:47:31.324195   44399 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:47:31.324565   44399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:47:31.499366   44399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:47:31.728125   44399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:47:31.948091   44399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:47:32.121471   44399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:47:32.122506   44399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:47:28.615338   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:47:28.615359   44536 machine.go:91] provisioned docker machine in 8.628498441s
	I0229 18:47:28.615372   44536 start.go:300] post-start starting for "pause-848791" (driver="kvm2")
	I0229 18:47:28.615382   44536 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:47:28.615409   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.615730   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:47:28.615769   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.618920   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.619414   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.619452   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.619746   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.619937   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.620136   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.620324   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.708382   44536 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:47:28.713618   44536 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:47:28.713649   44536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:47:28.713719   44536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:47:28.713787   44536 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:47:28.713901   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:47:28.725327   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:28.752879   44536 start.go:303] post-start completed in 137.496114ms
	I0229 18:47:28.752905   44536 fix.go:56] fixHost completed within 8.792406355s
	I0229 18:47:28.752930   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.755749   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.756142   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.756166   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.756304   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.756494   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.756645   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.756763   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.756900   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:28.757096   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:28.757117   44536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:47:28.864821   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232448.859541795
	
	I0229 18:47:28.864844   44536 fix.go:206] guest clock: 1709232448.859541795
	I0229 18:47:28.864853   44536 fix.go:219] Guest: 2024-02-29 18:47:28.859541795 +0000 UTC Remote: 2024-02-29 18:47:28.752910369 +0000 UTC m=+30.589786908 (delta=106.631426ms)
	I0229 18:47:28.864878   44536 fix.go:190] guest clock delta is within tolerance: 106.631426ms
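	(Editor's note, not part of the log: the delta above is guest clock minus remote clock, 1709232448.859541795 - 1709232448.752910369 = 0.106631426 s, i.e. the 106.631426ms reported as within tolerance here.)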
	I0229 18:47:28.864893   44536 start.go:83] releasing machines lock for "pause-848791", held for 8.904416272s
	I0229 18:47:28.864923   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.865211   44536 main.go:141] libmachine: (pause-848791) Calling .GetIP
	I0229 18:47:28.868322   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.868734   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.868773   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.868950   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869608   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869799   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869910   44536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:47:28.869956   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.870250   44536 ssh_runner.go:195] Run: cat /version.json
	I0229 18:47:28.870275   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.872964   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873019   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873346   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.873380   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873409   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.873425   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873606   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.873702   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.873785   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.873870   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.873984   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.874051   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.874119   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.874206   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.978934   44536 ssh_runner.go:195] Run: systemctl --version
	I0229 18:47:28.986710   44536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:47:29.153441   44536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:47:29.161741   44536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:47:29.161806   44536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:47:29.174387   44536 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:47:29.174427   44536 start.go:475] detecting cgroup driver to use...
	I0229 18:47:29.174490   44536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:47:29.196088   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:47:29.213100   44536 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:47:29.213152   44536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:47:29.228606   44536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:47:29.244268   44536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:47:29.437957   44536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:47:29.716720   44536 docker.go:233] disabling docker service ...
	I0229 18:47:29.716800   44536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:47:29.858729   44536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:47:30.080888   44536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:47:30.430034   44536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:47:30.792972   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:47:30.887568   44536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:47:30.911936   44536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:47:30.912006   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.925508   44536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:47:30.925562   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.940749   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.956037   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.969387   44536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:47:30.983294   44536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:47:31.015199   44536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:47:31.034727   44536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:47:31.238169   44536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:47:32.124299   44399 out.go:204]   - Booting up control plane ...
	I0229 18:47:32.124427   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:47:32.131277   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:47:32.132287   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:47:32.135893   44399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:47:32.141145   44399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:47:35.693721   44658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:47:35.693749   44658 machine.go:91] provisioned docker machine in 6.803780295s
	I0229 18:47:35.693761   44658 start.go:300] post-start starting for "kubernetes-upgrade-541086" (driver="kvm2")
	I0229 18:47:35.693773   44658 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:47:35.693795   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:35.694110   44658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:47:35.694142   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:35.697142   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.697486   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:35.697515   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.697676   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:35.697863   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:35.698039   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:35.698191   44658 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:47:35.795500   44658 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:47:35.800502   44658 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:47:35.800531   44658 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:47:35.800590   44658 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:47:35.800659   44658 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:47:35.800739   44658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:47:35.812025   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:35.844255   44658 start.go:303] post-start completed in 150.48136ms
	I0229 18:47:35.844290   44658 fix.go:56] fixHost completed within 6.979221941s
	I0229 18:47:35.844309   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:35.846682   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.847079   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:35.847121   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.847281   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:35.847479   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:35.847680   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:35.847833   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:35.847990   44658 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:35.848210   44658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0229 18:47:35.848227   44658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:47:35.977269   44658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232455.973593277
	
	I0229 18:47:35.977288   44658 fix.go:206] guest clock: 1709232455.973593277
	I0229 18:47:35.977294   44658 fix.go:219] Guest: 2024-02-29 18:47:35.973593277 +0000 UTC Remote: 2024-02-29 18:47:35.844293887 +0000 UTC m=+29.083472927 (delta=129.29939ms)
	I0229 18:47:35.977313   44658 fix.go:190] guest clock delta is within tolerance: 129.29939ms
	I0229 18:47:35.977318   44658 start.go:83] releasing machines lock for "kubernetes-upgrade-541086", held for 7.112327059s
	I0229 18:47:35.977335   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:35.977594   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:47:35.980018   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.980337   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:35.980363   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.980495   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:35.980974   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:35.981146   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .DriverName
	I0229 18:47:35.981241   44658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:47:35.981282   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:35.981404   44658 ssh_runner.go:195] Run: cat /version.json
	I0229 18:47:35.981435   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHHostname
	I0229 18:47:35.983822   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.984160   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:35.984194   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.984213   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.984311   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:35.984520   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:35.984586   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:35.984619   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:35.984686   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:35.984762   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHPort
	I0229 18:47:35.984913   44658 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:47:35.984921   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHKeyPath
	I0229 18:47:35.985085   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetSSHUsername
	I0229 18:47:35.985216   44658 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kubernetes-upgrade-541086/id_rsa Username:docker}
	I0229 18:47:36.069002   44658 ssh_runner.go:195] Run: systemctl --version
	I0229 18:47:36.093779   44658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:47:36.260197   44658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:47:36.267120   44658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:47:36.267176   44658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:47:36.278654   44658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:47:36.278685   44658 start.go:475] detecting cgroup driver to use...
	I0229 18:47:36.278745   44658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:47:36.296367   44658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:47:36.311166   44658 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:47:36.311237   44658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:47:36.325882   44658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:47:36.341377   44658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:47:36.505358   44658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:47:36.658830   44658 docker.go:233] disabling docker service ...
	I0229 18:47:36.658897   44658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:47:36.682673   44658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:47:36.698617   44658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:47:36.854681   44658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:47:37.003777   44658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:47:37.020813   44658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:47:37.041590   44658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:47:37.041668   44658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:37.053586   44658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:47:37.053652   44658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:37.065345   44658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:37.077758   44658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:37.090188   44658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:47:37.102609   44658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:47:37.113794   44658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:47:37.124699   44658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:47:37.262026   44658 ssh_runner.go:195] Run: sudo systemctl restart crio
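Note: the preceding block edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup) and then restarts CRI-O. A rough, illustrative Go equivalent of those line rewrites (a sketch, not minikube's actual implementation):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: force the pause image,
// switch cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod".
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}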
	I0229 18:47:37.513196   44658 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:47:37.513273   44658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:47:37.518765   44658 start.go:543] Will wait 60s for crictl version
	I0229 18:47:37.518824   44658 ssh_runner.go:195] Run: which crictl
	I0229 18:47:37.523447   44658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:47:37.567560   44658 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:47:37.567639   44658 ssh_runner.go:195] Run: crio --version
	I0229 18:47:37.598634   44658 ssh_runner.go:195] Run: crio --version
	I0229 18:47:37.633342   44658 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 18:47:37.634635   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) Calling .GetIP
	I0229 18:47:37.636969   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:37.637290   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:88:b9", ip: ""} in network mk-kubernetes-upgrade-541086: {Iface:virbr2 ExpiryTime:2024-02-29 19:46:42 +0000 UTC Type:0 Mac:52:54:00:2d:88:b9 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-541086 Clientid:01:52:54:00:2d:88:b9}
	I0229 18:47:37.637321   44658 main.go:141] libmachine: (kubernetes-upgrade-541086) DBG | domain kubernetes-upgrade-541086 has defined IP address 192.168.50.47 and MAC address 52:54:00:2d:88:b9 in network mk-kubernetes-upgrade-541086
	I0229 18:47:37.637586   44658 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:47:37.643169   44658 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:47:37.643223   44658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:37.691620   44658 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:47:37.691645   44658 crio.go:415] Images already preloaded, skipping extraction
	I0229 18:47:37.691700   44658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:37.734589   44658 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:47:37.734612   44658 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:47:37.734673   44658 ssh_runner.go:195] Run: crio config
	I0229 18:47:37.795571   44658 cni.go:84] Creating CNI manager for ""
	I0229 18:47:37.795596   44658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:37.795619   44658 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:47:37.795640   44658 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-541086 NodeName:kubernetes-upgrade-541086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:47:37.795803   44658 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-541086"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:47:37.795893   44658 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-541086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:47:37.795963   44658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:47:37.807853   44658 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:47:37.807929   44658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:47:37.818514   44658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0229 18:47:37.840042   44658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:47:37.921265   44658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2114 bytes)
	I0229 18:47:38.129528   44658 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0229 18:47:38.188293   44658 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086 for IP: 192.168.50.47
	I0229 18:47:38.188334   44658 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:38.188533   44658 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:47:38.188585   44658 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:47:38.188683   44658 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/client.key
	I0229 18:47:38.188747   44658 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key.6a3aec60
	I0229 18:47:38.188802   44658 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key
	I0229 18:47:38.188938   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:47:38.188976   44658 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:47:38.188991   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:47:38.189036   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:47:38.189069   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:47:38.189099   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:47:38.189144   44658 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:38.189782   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:47:38.454124   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:47:38.651538   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:47:38.780621   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kubernetes-upgrade-541086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:47:38.836769   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:47:38.895831   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:47:39.056983   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:47:39.118313   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:47:39.183904   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:47:39.255382   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:47:39.328104   44658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:47:39.369999   44658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:47:39.392699   44658 ssh_runner.go:195] Run: openssl version
	I0229 18:47:39.399813   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:47:39.425703   44658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:39.431337   44658 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:39.431396   44658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:39.438825   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:47:39.452804   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:47:39.467996   44658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:47:39.473939   44658 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:47:39.474002   44658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:47:39.482156   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:47:39.499249   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:47:39.515821   44658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:47:39.523964   44658 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:47:39.524036   44658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:47:39.534820   44658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:47:39.550167   44658 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:47:39.555291   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:47:39.562773   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:47:39.571511   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:47:39.581564   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:47:39.589166   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:47:39.597367   44658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
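Note: the openssl x509 -checkend 86400 commands above confirm that each control-plane certificate remains valid for at least 24 hours. The same check expressed in Go with crypto/x509, as an illustrative sketch using one certificate path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// inside the given window - the question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}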
	I0229 18:47:39.608232   44658 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-541086 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-541086 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:39.608340   44658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:47:39.608401   44658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:47:39.711882   44658 cri.go:89] found id: "1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e"
	I0229 18:47:39.711905   44658 cri.go:89] found id: "24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e"
	I0229 18:47:39.711911   44658 cri.go:89] found id: "67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264"
	I0229 18:47:39.711915   44658 cri.go:89] found id: "dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6"
	I0229 18:47:39.711920   44658 cri.go:89] found id: "752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66"
	I0229 18:47:39.711924   44658 cri.go:89] found id: "3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60"
	I0229 18:47:39.711928   44658 cri.go:89] found id: "f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c"
	I0229 18:47:39.711933   44658 cri.go:89] found id: "4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0"
	I0229 18:47:39.711937   44658 cri.go:89] found id: "b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6"
	I0229 18:47:39.711943   44658 cri.go:89] found id: "54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c"
	I0229 18:47:39.711947   44658 cri.go:89] found id: "d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086"
	I0229 18:47:39.711951   44658 cri.go:89] found id: "7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184"
	I0229 18:47:39.711957   44658 cri.go:89] found id: ""
	I0229 18:47:39.712004   44658 ssh_runner.go:195] Run: sudo runc list -f json
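Note: StartCluster first enumerates the existing kube-system containers by shelling out to crictl with a namespace label filter, as seen in the found-id lines above. A simplified stand-alone sketch of that listing, assuming crictl is on PATH and root access via sudo (not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs crictl reports for the
// kube-system namespace, mirroring the "crictl ps -a --quiet --label ..." call above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}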
	
	
	==> CRI-O <==
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.316652982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232465316611118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71a85e8b-d6ff-40a9-a059-225dc4bab4e7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.318110929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1a68077-dd5d-4937-ac24-8d4f0ef23710 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.318199231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1a68077-dd5d-4937-ac24-8d4f0ef23710 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.318813726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1febe0a59aad4577d29c0ca64b5d5cab898066646748684808df0b3b841d4d0,PodSandboxId:6d1dc8187cf83cca0e8783e70013602364c5f75213756841f265d12a952241b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709232459178565250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a2e44b58141c1578b00b111d6cc91bb603cca576fa3093a17ec8edd2661fbc,PodSandboxId:6b521ed87a866a33e883439f4f94a55bab34521bd2bfb3eb7ff80ba705694be1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709232458498964762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e,PodSandboxId:0f8be6ac7f63cc8be39fdc49bda4d84c68ef22ec82cb7e771ff2807fd358a876,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709232458920884784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e,PodSandboxId:51855b982056e23aaa07ed39d85e9c0a71e2f9b6b9c8ad7c6e11562a41f5ba87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709232458509667582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264,PodSandboxId:ba08e1f06c6a4e8a2d255fb4adfddb3ba55a1ee4a8da45b8b877d28e1d9f4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709232458460208063,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785a
ad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6,PodSandboxId:344d1ffd0ed31073c20f52a19cc0d5897fc6b4cb994158d179ec604d0b0bedfe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709232458260137288,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718
961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66,PodSandboxId:2178e924758b9614c6a2bc52ec3e6a53eb7b3c69f08a2aad4dd6febbba018e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709232458230842645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c416
9149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60,PodSandboxId:30d4dca2e42e504f8233e399e90b2f3e5161f82d7a7fac7a70abd4e48c253243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709232439659303086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations
:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c,PodSandboxId:4d4c038357226c44e1b51c27e3834aecc5d7f2ed6891da130ce5abf25889ecff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709232439489782130,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0,PodSandboxId:0e87f0d89a91fe724f4ad0d87b8ee2fb6b3b3f905239dbbfe53f8fff2b443f4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709232439216548169,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c,PodSandboxId:c3ca81243a9a60f911b6a4c999eb570d06a0bf117fc05f3e4d3f8b990ad163fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709232420357514588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6,PodSandboxId:7c48138ed5a68e017e89d2e1b98b789907cd1097d3855cc46784e5940bdfc883,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709232420371297277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785aad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086,PodSandboxId:d144f45d7342d854b542e36a5031cb85bb27623f49d2a82b123abf6a8840a866,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709232420353614164,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kuber
netes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c4169149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184,PodSandboxId:31a66b275aeda1bd9dd1bed9e9dfd4517281df3c98f4d7609aff92606d0e7fa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709232420240228309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-con
troller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1a68077-dd5d-4937-ac24-8d4f0ef23710 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.388759636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75770f00-b29f-49fe-a5b0-a7a9f78d66fb name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.388915570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75770f00-b29f-49fe-a5b0-a7a9f78d66fb name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.390982603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88607c62-0a2a-4797-8c10-8decc2bf9f9c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.391680508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232465391645787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88607c62-0a2a-4797-8c10-8decc2bf9f9c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.392426537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46e15e0b-0021-4fe6-8835-30e309e5387e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.392534039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46e15e0b-0021-4fe6-8835-30e309e5387e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.392905689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1febe0a59aad4577d29c0ca64b5d5cab898066646748684808df0b3b841d4d0,PodSandboxId:6d1dc8187cf83cca0e8783e70013602364c5f75213756841f265d12a952241b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709232459178565250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a2e44b58141c1578b00b111d6cc91bb603cca576fa3093a17ec8edd2661fbc,PodSandboxId:6b521ed87a866a33e883439f4f94a55bab34521bd2bfb3eb7ff80ba705694be1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709232458498964762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e,PodSandboxId:0f8be6ac7f63cc8be39fdc49bda4d84c68ef22ec82cb7e771ff2807fd358a876,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709232458920884784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e,PodSandboxId:51855b982056e23aaa07ed39d85e9c0a71e2f9b6b9c8ad7c6e11562a41f5ba87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709232458509667582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264,PodSandboxId:ba08e1f06c6a4e8a2d255fb4adfddb3ba55a1ee4a8da45b8b877d28e1d9f4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709232458460208063,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785a
ad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6,PodSandboxId:344d1ffd0ed31073c20f52a19cc0d5897fc6b4cb994158d179ec604d0b0bedfe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709232458260137288,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718
961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66,PodSandboxId:2178e924758b9614c6a2bc52ec3e6a53eb7b3c69f08a2aad4dd6febbba018e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709232458230842645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c416
9149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60,PodSandboxId:30d4dca2e42e504f8233e399e90b2f3e5161f82d7a7fac7a70abd4e48c253243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709232439659303086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations
:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c,PodSandboxId:4d4c038357226c44e1b51c27e3834aecc5d7f2ed6891da130ce5abf25889ecff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709232439489782130,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0,PodSandboxId:0e87f0d89a91fe724f4ad0d87b8ee2fb6b3b3f905239dbbfe53f8fff2b443f4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709232439216548169,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c,PodSandboxId:c3ca81243a9a60f911b6a4c999eb570d06a0bf117fc05f3e4d3f8b990ad163fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709232420357514588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6,PodSandboxId:7c48138ed5a68e017e89d2e1b98b789907cd1097d3855cc46784e5940bdfc883,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709232420371297277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785aad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086,PodSandboxId:d144f45d7342d854b542e36a5031cb85bb27623f49d2a82b123abf6a8840a866,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709232420353614164,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kuber
netes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c4169149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184,PodSandboxId:31a66b275aeda1bd9dd1bed9e9dfd4517281df3c98f4d7609aff92606d0e7fa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709232420240228309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-con
troller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46e15e0b-0021-4fe6-8835-30e309e5387e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.461651916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca55f2d5-8b7c-406a-8866-f4cea653a8e0 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.461792832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca55f2d5-8b7c-406a-8866-f4cea653a8e0 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.463682172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=666d882b-3cf0-49e4-a227-a3f0386fcb52 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.464334623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232465464298197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=666d882b-3cf0-49e4-a227-a3f0386fcb52 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.464939269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=892d7bf4-70f8-4771-8678-3e89eb8c8f3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.465099240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=892d7bf4-70f8-4771-8678-3e89eb8c8f3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.465640650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1febe0a59aad4577d29c0ca64b5d5cab898066646748684808df0b3b841d4d0,PodSandboxId:6d1dc8187cf83cca0e8783e70013602364c5f75213756841f265d12a952241b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709232459178565250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a2e44b58141c1578b00b111d6cc91bb603cca576fa3093a17ec8edd2661fbc,PodSandboxId:6b521ed87a866a33e883439f4f94a55bab34521bd2bfb3eb7ff80ba705694be1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709232458498964762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e,PodSandboxId:0f8be6ac7f63cc8be39fdc49bda4d84c68ef22ec82cb7e771ff2807fd358a876,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709232458920884784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e,PodSandboxId:51855b982056e23aaa07ed39d85e9c0a71e2f9b6b9c8ad7c6e11562a41f5ba87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709232458509667582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264,PodSandboxId:ba08e1f06c6a4e8a2d255fb4adfddb3ba55a1ee4a8da45b8b877d28e1d9f4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709232458460208063,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785a
ad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6,PodSandboxId:344d1ffd0ed31073c20f52a19cc0d5897fc6b4cb994158d179ec604d0b0bedfe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709232458260137288,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718
961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66,PodSandboxId:2178e924758b9614c6a2bc52ec3e6a53eb7b3c69f08a2aad4dd6febbba018e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709232458230842645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c416
9149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60,PodSandboxId:30d4dca2e42e504f8233e399e90b2f3e5161f82d7a7fac7a70abd4e48c253243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709232439659303086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations
:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c,PodSandboxId:4d4c038357226c44e1b51c27e3834aecc5d7f2ed6891da130ce5abf25889ecff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709232439489782130,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0,PodSandboxId:0e87f0d89a91fe724f4ad0d87b8ee2fb6b3b3f905239dbbfe53f8fff2b443f4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709232439216548169,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c,PodSandboxId:c3ca81243a9a60f911b6a4c999eb570d06a0bf117fc05f3e4d3f8b990ad163fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709232420357514588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6,PodSandboxId:7c48138ed5a68e017e89d2e1b98b789907cd1097d3855cc46784e5940bdfc883,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709232420371297277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785aad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086,PodSandboxId:d144f45d7342d854b542e36a5031cb85bb27623f49d2a82b123abf6a8840a866,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709232420353614164,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kuber
netes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c4169149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184,PodSandboxId:31a66b275aeda1bd9dd1bed9e9dfd4517281df3c98f4d7609aff92606d0e7fa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709232420240228309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-con
troller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=892d7bf4-70f8-4771-8678-3e89eb8c8f3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.512394671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f342b5a-8518-48d5-ba78-481d0515c4ea name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.512563105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f342b5a-8518-48d5-ba78-481d0515c4ea name=/runtime.v1.RuntimeService/Version
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.514644648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c85a043f-9675-453f-8993-b84540a71153 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.515233142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232465515200669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c85a043f-9675-453f-8993-b84540a71153 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.516308552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c65c85e-42d2-4b6a-b590-34f5824f7130 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.516363698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c65c85e-42d2-4b6a-b590-34f5824f7130 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:47:45 kubernetes-upgrade-541086 crio[1933]: time="2024-02-29 18:47:45.516673026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1febe0a59aad4577d29c0ca64b5d5cab898066646748684808df0b3b841d4d0,PodSandboxId:6d1dc8187cf83cca0e8783e70013602364c5f75213756841f265d12a952241b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709232459178565250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a2e44b58141c1578b00b111d6cc91bb603cca576fa3093a17ec8edd2661fbc,PodSandboxId:6b521ed87a866a33e883439f4f94a55bab34521bd2bfb3eb7ff80ba705694be1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709232458498964762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e,PodSandboxId:0f8be6ac7f63cc8be39fdc49bda4d84c68ef22ec82cb7e771ff2807fd358a876,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709232458920884784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e,PodSandboxId:51855b982056e23aaa07ed39d85e9c0a71e2f9b6b9c8ad7c6e11562a41f5ba87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709232458509667582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264,PodSandboxId:ba08e1f06c6a4e8a2d255fb4adfddb3ba55a1ee4a8da45b8b877d28e1d9f4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709232458460208063,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785a
ad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6,PodSandboxId:344d1ffd0ed31073c20f52a19cc0d5897fc6b4cb994158d179ec604d0b0bedfe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709232458260137288,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718
961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66,PodSandboxId:2178e924758b9614c6a2bc52ec3e6a53eb7b3c69f08a2aad4dd6febbba018e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709232458230842645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c416
9149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60,PodSandboxId:30d4dca2e42e504f8233e399e90b2f3e5161f82d7a7fac7a70abd4e48c253243,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709232439659303086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-dxbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ae3e264-1db8-4a9f-a9a4-84aed40d5d21,},Annotations
:map[string]string{io.kubernetes.container.hash: 7a09ec24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c,PodSandboxId:4d4c038357226c44e1b51c27e3834aecc5d7f2ed6891da130ce5abf25889ecff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709232439489782130,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drwqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f75b3ce5-00bd-442d-ba5b-b3503a1199e0,},Annotations:map[string]string{io.kubernetes.container.hash: 81ed03b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0,PodSandboxId:0e87f0d89a91fe724f4ad0d87b8ee2fb6b3b3f905239dbbfe53f8fff2b443f4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709232439216548169,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5607dd4d-b5fc-4397-9474-d2303e89dd7e,},Annotations:map[string]string{io.kubernetes.container.hash: a0b54e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c,PodSandboxId:c3ca81243a9a60f911b6a4c999eb570d06a0bf117fc05f3e4d3f8b990ad163fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709232420357514588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d434d718961abfa626899f7db129cb1,},Annotations:map[string]string{io.kubernetes.container.hash: f05276f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6,PodSandboxId:7c48138ed5a68e017e89d2e1b98b789907cd1097d3855cc46784e5940bdfc883,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709232420371297277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785aad4c6acf5e0010f0060d25a0b1bd,},Annotations:map[string]string{io.kubernetes.container.hash: e41bdb79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086,PodSandboxId:d144f45d7342d854b542e36a5031cb85bb27623f49d2a82b123abf6a8840a866,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709232420353614164,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kuber
netes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b996d4316c4169149dd401dcf04722a,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184,PodSandboxId:31a66b275aeda1bd9dd1bed9e9dfd4517281df3c98f4d7609aff92606d0e7fa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709232420240228309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-con
troller-manager-kubernetes-upgrade-541086,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98d46c0f0c509f37247bcdb78ec6e1,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c65c85e-42d2-4b6a-b590-34f5824f7130 name=/runtime.v1.RuntimeService/ListContainers
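The repeating Version, ImageFsInfo, and ListContainers entries above are debug-level traces of CRI (Container Runtime Interface) gRPC requests hitting CRI-O's RuntimeService and ImageService; the empty filter in each ListContainersRequest is why CRI-O notes "No filters were applied, returning full container list" and dumps every container, running and exited. The same listing can be reproduced on the node with crictl; a minimal sketch, assuming CRI-O's default socket path:

    out/minikube-linux-amd64 -p kubernetes-upgrade-541086 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"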
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1febe0a59aad       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   6 seconds ago       Running             kube-proxy                1                   6d1dc8187cf83       kube-proxy-drwqq
	1b2048f4c8b2a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago       Running             coredns                   1                   0f8be6ac7f63c       coredns-76f75df574-dxbmj
	24d8830716732       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   7 seconds ago       Running             kube-controller-manager   1                   51855b982056e       kube-controller-manager-kubernetes-upgrade-541086
	a1a2e44b58141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 seconds ago       Running             storage-provisioner       1                   6b521ed87a866       storage-provisioner
	67707e00b939b       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   7 seconds ago       Running             etcd                      1                   ba08e1f06c6a4       etcd-kubernetes-upgrade-541086
	dbbe351e90270       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   7 seconds ago       Running             kube-apiserver            1                   344d1ffd0ed31       kube-apiserver-kubernetes-upgrade-541086
	752b1c671d063       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   7 seconds ago       Running             kube-scheduler            1                   2178e924758b9       kube-scheduler-kubernetes-upgrade-541086
	3b090033d2286       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   0                   30d4dca2e42e5       coredns-76f75df574-dxbmj
	f4b9cbed41fed       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   26 seconds ago      Exited              kube-proxy                0                   4d4c038357226       kube-proxy-drwqq
	4e974c883c4fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   26 seconds ago      Exited              storage-provisioner       0                   0e87f0d89a91f       storage-provisioner
	b1a2e94f32751       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   45 seconds ago      Exited              etcd                      0                   7c48138ed5a68       etcd-kubernetes-upgrade-541086
	54dc7ac706b46       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   45 seconds ago      Exited              kube-apiserver            0                   c3ca81243a9a6       kube-apiserver-kubernetes-upgrade-541086
	d1bcc193b5d4f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   45 seconds ago      Exited              kube-scheduler            0                   d144f45d7342d       kube-scheduler-kubernetes-upgrade-541086
	7aed1d60751b7       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   45 seconds ago      Exited              kube-controller-manager   0                   31a66b275aeda       kube-controller-manager-kubernetes-upgrade-541086
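Every pod in the listing has a Running attempt 1 created roughly 6-7 seconds before collection, paired with an Exited attempt 0, i.e. the control plane and addons were restarted in place as part of the upgrade under test. Logs for any single container can be pulled by container ID; for example, for the restarted coredns instance (truncated ID assumed to be unambiguous):

    out/minikube-linux-amd64 -p kubernetes-upgrade-541086 ssh "sudo crictl logs 1b2048f4c8b2a"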
	
	
	==> coredns [1b2048f4c8b2adf1520414b4176e24acd35d5e0462ad815732f56c531beb7b3e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54366 - 3951 "HINFO IN 5628997112434304586.2815288058179950265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02045245s
	
	
	==> coredns [3b090033d2286d8ad4fb1c9a3679869434944ec8255251584d7d7cd763f15d60] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57205 - 24483 "HINFO IN 8498788748583129158.8472563996126944661. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012817615s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
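Both coredns instances spend their startup waiting for the Kubernetes API, and the attempt-0 instance even begins serving with an unsynced API before receiving SIGTERM during the restart. Pod-level status for coredns can be cross-checked from the host; a sketch, assuming the usual kubeadm label and a kubectl context named after the profile:

    kubectl --context kubernetes-upgrade-541086 -n kube-system get pods -l k8s-app=kube-dns -o wide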
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-541086
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-541086
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:47:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-541086
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:47:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:47:23 +0000   Thu, 29 Feb 2024 18:47:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:47:23 +0000   Thu, 29 Feb 2024 18:47:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:47:23 +0000   Thu, 29 Feb 2024 18:47:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:47:23 +0000   Thu, 29 Feb 2024 18:47:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.47
	  Hostname:    kubernetes-upgrade-541086
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bbe602e3b8409db7bdc1413734780b
	  System UUID:                42bbe602-e3b8-409d-b7bd-c1413734780b
	  Boot ID:                    405e1cee-8e35-439a-9ea3-3e33f2110153
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-dxbmj                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26s
	  kube-system                 etcd-kubernetes-upgrade-541086                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         40s
	  kube-system                 kube-apiserver-kubernetes-upgrade-541086              250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-541086    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-drwqq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-kubernetes-upgrade-541086              100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node kubernetes-upgrade-541086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node kubernetes-upgrade-541086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x7 over 46s)  kubelet          Node kubernetes-upgrade-541086 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node kubernetes-upgrade-541086 event: Registered Node kubernetes-upgrade-541086 in Controller
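The node description above (labels, conditions, capacity, non-terminated pods, events) reports the node Ready, with both kube-proxy generations recorded in the events; it can be regenerated directly against the cluster, again assuming the context matches the profile name:

    kubectl --context kubernetes-upgrade-541086 describe node kubernetes-upgrade-541086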
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059644] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047643] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.958326] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.700926] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.458647] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.857757] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.068288] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.091708] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.209397] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.139799] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.321041] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +7.513057] systemd-fstab-generator[815]: Ignoring "noauto" option for root device
	[  +0.062813] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 18:47] kauditd_printk_skb: 69 callbacks suppressed
	[ +22.167345] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[  +0.084639] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.084188] systemd-fstab-generator[1869]: Ignoring "noauto" option for root device
	[  +0.176895] systemd-fstab-generator[1883]: Ignoring "noauto" option for root device
	[  +0.162665] systemd-fstab-generator[1895]: Ignoring "noauto" option for root device
	[  +0.259857] systemd-fstab-generator[1919]: Ignoring "noauto" option for root device
	[  +5.618229] kauditd_printk_skb: 179 callbacks suppressed
	
	
	==> etcd [67707e00b939b3568b1335bd39723c230cd328b69d7d8cd2d699c5e4c2a67264] <==
	{"level":"info","ts":"2024-02-29T18:47:39.09241Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:39.092425Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:39.092728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 switched to configuration voters=(7192582293827122163)"}
	{"level":"info","ts":"2024-02-29T18:47:39.092848Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","added-peer-id":"63d12f7d015473f3","added-peer-peer-urls":["https://192.168.50.47:2380"]}
	{"level":"info","ts":"2024-02-29T18:47:39.093063Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:39.09323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:39.142673Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:47:39.14286Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"63d12f7d015473f3","initial-advertise-peer-urls":["https://192.168.50.47:2380"],"listen-peer-urls":["https://192.168.50.47:2380"],"advertise-client-urls":["https://192.168.50.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:47:39.142914Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:47:39.147719Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-02-29T18:47:39.147741Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-02-29T18:47:40.189225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:40.189303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:40.189331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgPreVoteResp from 63d12f7d015473f3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:40.18935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:40.189355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgVoteResp from 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:40.189363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:40.189371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 63d12f7d015473f3 elected leader 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:40.193468Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"63d12f7d015473f3","local-member-attributes":"{Name:kubernetes-upgrade-541086 ClientURLs:[https://192.168.50.47:2379]}","request-path":"/0/members/63d12f7d015473f3/attributes","cluster-id":"a66a701203d69b1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:40.193591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:40.193732Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:40.20052Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.47:2379"}
	{"level":"info","ts":"2024-02-29T18:47:40.201151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:40.204044Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:40.205429Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b1a2e94f32751d4dab8cd5a1b299913bb68351ea591477ccad336644d3a57df6] <==
	{"level":"info","ts":"2024-02-29T18:47:01.158563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:01.158569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgVoteResp from 63d12f7d015473f3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:01.158577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:01.158584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 63d12f7d015473f3 elected leader 63d12f7d015473f3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:01.162219Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"63d12f7d015473f3","local-member-attributes":"{Name:kubernetes-upgrade-541086 ClientURLs:[https://192.168.50.47:2379]}","request-path":"/0/members/63d12f7d015473f3/attributes","cluster-id":"a66a701203d69b1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:01.162276Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:01.162555Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:01.164645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:47:01.165473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:01.172151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:01.17224Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:01.165493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:01.165584Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:01.172552Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:01.175822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.47:2379"}
	{"level":"info","ts":"2024-02-29T18:47:29.910863Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T18:47:29.911167Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-541086","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.47:2380"],"advertise-client-urls":["https://192.168.50.47:2379"]}
	{"level":"warn","ts":"2024-02-29T18:47:29.911303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:47:29.911426Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:47:29.976863Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.47:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:47:29.977089Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.47:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T18:47:29.977185Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"63d12f7d015473f3","current-leader-member-id":"63d12f7d015473f3"}
	{"level":"info","ts":"2024-02-29T18:47:29.98132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-02-29T18:47:29.98154Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-02-29T18:47:29.981616Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-541086","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.47:2380"],"advertise-client-urls":["https://192.168.50.47:2379"]}
	
	
	==> kernel <==
	 18:47:46 up 1 min,  0 users,  load average: 1.86, 0.50, 0.17
	Linux kubernetes-upgrade-541086 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [54dc7ac706b462a2bacef2d0db58d03aa831304bdc62584db121cdd34ac0427c] <==
	I0229 18:47:29.927532       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0229 18:47:29.927548       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0229 18:47:29.927568       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0229 18:47:29.927586       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0229 18:47:29.927606       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0229 18:47:29.927622       1 establishing_controller.go:87] Shutting down EstablishingController
	I0229 18:47:29.927641       1 naming_controller.go:302] Shutting down NamingConditionController
	I0229 18:47:29.927658       1 controller.go:161] Shutting down OpenAPI controller
	I0229 18:47:29.927842       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0229 18:47:29.927870       1 available_controller.go:439] Shutting down AvailableConditionController
	I0229 18:47:29.927890       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0229 18:47:29.929322       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:47:29.930702       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:47:29.930964       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 18:47:29.931124       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0229 18:47:29.931220       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0229 18:47:29.931275       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:47:29.932492       1 controller.go:159] Shutting down quota evaluator
	I0229 18:47:29.932514       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:47:29.933492       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:47:29.933509       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:47:29.933516       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:47:29.933522       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:47:29.935979       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0229 18:47:29.939308       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-apiserver [dbbe351e9027037265ec73e35f35127650378f5d09e3a51bf3b9ecc3098e55b6] <==
	I0229 18:47:42.109862       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0229 18:47:42.109900       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0229 18:47:42.109939       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 18:47:42.110063       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:47:42.110270       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:47:42.112697       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 18:47:42.112748       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 18:47:42.110573       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0229 18:47:42.110607       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0229 18:47:42.312800       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:47:42.316619       1 aggregator.go:165] initial CRD sync complete...
	I0229 18:47:42.316661       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:47:42.316669       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:47:42.332477       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:47:42.333642       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:47:42.359887       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:47:42.408410       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:47:42.408676       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 18:47:42.408718       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 18:47:42.409160       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:47:42.413455       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:47:42.414530       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:47:42.420966       1 cache.go:39] Caches are synced for autoregister controller
	E0229 18:47:42.456675       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 18:47:43.112734       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-controller-manager [24d883071673223f3d6978a8ab5c26cd34a5a2a24fdb8d08c1cd6560f86afc8e] <==
	I0229 18:47:40.096982       1 serving.go:380] Generated self-signed cert in-memory
	I0229 18:47:40.576918       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0229 18:47:40.581099       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:40.582765       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:47:40.583053       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:47:40.584093       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:47:40.584276       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0229 18:47:44.224865       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0229 18:47:44.225269       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0229 18:47:44.230683       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0229 18:47:44.231167       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0229 18:47:44.231205       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0229 18:47:44.235930       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0229 18:47:44.236134       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0229 18:47:44.236168       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0229 18:47:44.239423       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0229 18:47:44.239677       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0229 18:47:44.239712       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0229 18:47:44.326306       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [7aed1d60751b79345babb87c0fab30381780d0749ed91fbb22c7746e3d989184] <==
	I0229 18:47:18.144590       1 range_allocator.go:380] "Set node PodCIDR" node="kubernetes-upgrade-541086" podCIDRs=["10.244.0.0/24"]
	I0229 18:47:18.159762       1 shared_informer.go:318] Caches are synced for endpoint
	I0229 18:47:18.161080       1 shared_informer.go:318] Caches are synced for HPA
	I0229 18:47:18.164620       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 18:47:18.171162       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0229 18:47:18.189249       1 shared_informer.go:318] Caches are synced for disruption
	I0229 18:47:18.200666       1 shared_informer.go:318] Caches are synced for expand
	I0229 18:47:18.216497       1 shared_informer.go:318] Caches are synced for ephemeral
	I0229 18:47:18.226108       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 18:47:18.264517       1 shared_informer.go:318] Caches are synced for deployment
	I0229 18:47:18.267591       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 18:47:18.268880       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0229 18:47:18.314587       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 18:47:18.320258       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:47:18.329625       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:47:18.632950       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:47:18.633153       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 18:47:18.675286       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:47:18.876724       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 1"
	I0229 18:47:18.942788       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-drwqq"
	I0229 18:47:19.026349       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-dxbmj"
	I0229 18:47:19.056205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="177.611777ms"
	I0229 18:47:19.105951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="49.427695ms"
	I0229 18:47:19.106150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="50.6µs"
	I0229 18:47:19.863077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="116.103µs"
	
	
	==> kube-proxy [f1febe0a59aad4577d29c0ca64b5d5cab898066646748684808df0b3b841d4d0] <==
	I0229 18:47:40.894570       1 server_others.go:72] "Using iptables proxy"
	I0229 18:47:42.350387       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.47"]
	I0229 18:47:42.513129       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 18:47:42.513273       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:47:42.513403       1 server_others.go:168] "Using iptables Proxier"
	I0229 18:47:42.532923       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:47:42.533764       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 18:47:42.534535       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:42.542500       1 config.go:315] "Starting node config controller"
	I0229 18:47:42.543727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:47:42.544642       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:47:42.547558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:47:42.547740       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:47:42.545214       1 config.go:188] "Starting service config controller"
	I0229 18:47:42.547829       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:47:42.547852       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:47:42.650607       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f4b9cbed41feddd1bdbcc0d87a938811598d892283ef304229cda03d9bef442c] <==
	I0229 18:47:19.730734       1 server_others.go:72] "Using iptables proxy"
	I0229 18:47:19.755567       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.47"]
	I0229 18:47:19.925512       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 18:47:19.925561       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:47:19.925574       1 server_others.go:168] "Using iptables Proxier"
	I0229 18:47:19.931929       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:47:19.932616       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 18:47:19.932757       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:19.933746       1 config.go:188] "Starting service config controller"
	I0229 18:47:19.933821       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:47:19.933854       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:47:19.933871       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:47:19.934449       1 config.go:315] "Starting node config controller"
	I0229 18:47:19.936212       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:47:20.033979       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:47:20.034061       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:47:20.036664       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [752b1c671d063a5651136fee766116191348aff8afaf6199b561f683bbfc9c66] <==
	I0229 18:47:40.811429       1 serving.go:380] Generated self-signed cert in-memory
	W0229 18:47:42.207850       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:47:42.208150       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:47:42.208273       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:47:42.208300       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:47:42.283539       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 18:47:42.283618       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:42.286343       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:47:42.286865       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:47:42.286978       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:47:42.287099       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0229 18:47:42.311631       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 18:47:42.311773       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:47:42.327867       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:42.328042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 18:47:42.328239       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:42.328397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 18:47:42.328706       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:42.328755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0229 18:47:43.887542       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d1bcc193b5d4fd332626af1ef37ccc1145a0fd557753eca9e81f851c8dfd1086] <==
	W0229 18:47:03.953606       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:03.953664       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 18:47:03.967912       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 18:47:03.967968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:47:04.022149       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 18:47:04.022227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 18:47:04.024173       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 18:47:04.024241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 18:47:04.059547       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:04.059648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 18:47:04.069933       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 18:47:04.070064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 18:47:04.084681       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:47:04.085077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:47:04.125638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 18:47:04.126042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 18:47:04.360412       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 18:47:04.360484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 18:47:04.420595       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 18:47:04.420649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0229 18:47:07.055775       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:47:29.922407       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 18:47:29.922583       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:47:29.922874       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 18:47:29.931847       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 18:47:37 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:37.942441     822 status_manager.go:853] "Failed to get status for pod" podUID="8d434d718961abfa626899f7db129cb1" pod="kube-system/kube-apiserver-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:37 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:37.943410     822 status_manager.go:853] "Failed to get status for pod" podUID="6d98d46c0f0c509f37247bcdb78ec6e1" pod="kube-system/kube-controller-manager-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.946241     822 status_manager.go:853] "Failed to get status for pod" podUID="1ae3e264-1db8-4a9f-a9a4-84aed40d5d21" pod="kube-system/coredns-76f75df574-dxbmj" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-dxbmj\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.948747     822 status_manager.go:853] "Failed to get status for pod" podUID="8d434d718961abfa626899f7db129cb1" pod="kube-system/kube-apiserver-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.951529     822 status_manager.go:853] "Failed to get status for pod" podUID="6d98d46c0f0c509f37247bcdb78ec6e1" pod="kube-system/kube-controller-manager-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.952939     822 status_manager.go:853] "Failed to get status for pod" podUID="3b996d4316c4169149dd401dcf04722a" pod="kube-system/kube-scheduler-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.953985     822 status_manager.go:853] "Failed to get status for pod" podUID="785aad4c6acf5e0010f0060d25a0b1bd" pod="kube-system/etcd-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.956317     822 status_manager.go:853] "Failed to get status for pod" podUID="5607dd4d-b5fc-4397-9474-d2303e89dd7e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.957270     822 status_manager.go:853] "Failed to get status for pod" podUID="f75b3ce5-00bd-442d-ba5b-b3503a1199e0" pod="kube-system/kube-proxy-drwqq" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drwqq\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:38 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:38.973086     822 status_manager.go:853] "Failed to get status for pod" podUID="8d434d718961abfa626899f7db129cb1" pod="kube-system/kube-apiserver-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.022602     822 status_manager.go:853] "Failed to get status for pod" podUID="6d98d46c0f0c509f37247bcdb78ec6e1" pod="kube-system/kube-controller-manager-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.032822     822 status_manager.go:853] "Failed to get status for pod" podUID="3b996d4316c4169149dd401dcf04722a" pod="kube-system/kube-scheduler-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.035301     822 status_manager.go:853] "Failed to get status for pod" podUID="785aad4c6acf5e0010f0060d25a0b1bd" pod="kube-system/etcd-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.036322     822 status_manager.go:853] "Failed to get status for pod" podUID="5607dd4d-b5fc-4397-9474-d2303e89dd7e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.038251     822 status_manager.go:853] "Failed to get status for pod" podUID="f75b3ce5-00bd-442d-ba5b-b3503a1199e0" pod="kube-system/kube-proxy-drwqq" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drwqq\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.039460     822 status_manager.go:853] "Failed to get status for pod" podUID="1ae3e264-1db8-4a9f-a9a4-84aed40d5d21" pod="kube-system/coredns-76f75df574-dxbmj" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-dxbmj\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.041603     822 status_manager.go:853] "Failed to get status for pod" podUID="785aad4c6acf5e0010f0060d25a0b1bd" pod="kube-system/etcd-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.042613     822 status_manager.go:853] "Failed to get status for pod" podUID="5607dd4d-b5fc-4397-9474-d2303e89dd7e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.066386     822 status_manager.go:853] "Failed to get status for pod" podUID="f75b3ce5-00bd-442d-ba5b-b3503a1199e0" pod="kube-system/kube-proxy-drwqq" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drwqq\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.068492     822 status_manager.go:853] "Failed to get status for pod" podUID="1ae3e264-1db8-4a9f-a9a4-84aed40d5d21" pod="kube-system/coredns-76f75df574-dxbmj" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-dxbmj\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.070489     822 status_manager.go:853] "Failed to get status for pod" podUID="8d434d718961abfa626899f7db129cb1" pod="kube-system/kube-apiserver-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.073516     822 status_manager.go:853] "Failed to get status for pod" podUID="6d98d46c0f0c509f37247bcdb78ec6e1" pod="kube-system/kube-controller-manager-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:39 kubernetes-upgrade-541086 kubelet[822]: I0229 18:47:39.076545     822 status_manager.go:853] "Failed to get status for pod" podUID="3b996d4316c4169149dd401dcf04722a" pod="kube-system/kube-scheduler-kubernetes-upgrade-541086" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-541086\": dial tcp 192.168.50.47:8443: connect: connection refused"
	Feb 29 18:47:42 kubernetes-upgrade-541086 kubelet[822]: E0229 18:47:42.193790     822 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 18:47:42 kubernetes-upgrade-541086 kubelet[822]: E0229 18:47:42.194411     822 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	
	==> storage-provisioner [4e974c883c4fc2273157b59e6441898a8136383a5a1f61637e41d55faa5c37e0] <==
	I0229 18:47:19.417126       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [a1a2e44b58141c1578b00b111d6cc91bb603cca576fa3093a17ec8edd2661fbc] <==
	I0229 18:47:40.519485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 18:47:42.352450       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 18:47:42.352941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 18:47:42.371448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 18:47:42.372159       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30484ace-662c-4077-9c0a-3f6fa6b71ae6", APIVersion:"v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-541086_8dbe4e43-7e98-44b9-86f9-ccee805bca7f became leader
	I0229 18:47:42.372795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-541086_8dbe4e43-7e98-44b9-86f9-ccee805bca7f!
	I0229 18:47:42.474870       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-541086_8dbe4e43-7e98-44b9-86f9-ccee805bca7f!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:47:44.878790   44931 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18259-6428/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
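The "bufio.Scanner: token too long" error in the stderr above comes from Go's bufio.Scanner, which rejects any line longer than its default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal sketch of reading such a file with a raised per-line limit follows; it is illustrative only and not minikube's actual logs.go code. The file path is copied from the error message, and the 1 MiB cap is an assumption for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the error message above; adjust for a local run.
	f, err := os.Open("/home/jenkins/minikube-integration/18259-6428/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the default 64 KiB to 1 MiB (assumed size)
	// so very long start-log lines do not abort the scan with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}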
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-541086 -n kubernetes-upgrade-541086
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-541086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-541086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-541086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-541086: (1.16851081s)
--- FAIL: TestKubernetesUpgrade (418.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (270.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: exit status 109 (4m30.536456275s)

                                                
                                                
-- stdout --
	* [old-k8s-version-631080] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-631080 in cluster old-k8s-version-631080
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:46:54.369989   44399 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:46:54.370098   44399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:54.370109   44399 out.go:304] Setting ErrFile to fd 2...
	I0229 18:46:54.370115   44399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:54.370313   44399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:46:54.370918   44399 out.go:298] Setting JSON to false
	I0229 18:46:54.371942   44399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5359,"bootTime":1709227056,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:46:54.372009   44399 start.go:139] virtualization: kvm guest
	I0229 18:46:54.374309   44399 out.go:177] * [old-k8s-version-631080] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:46:54.375787   44399 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:46:54.375785   44399 notify.go:220] Checking for updates...
	I0229 18:46:54.378769   44399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:46:54.380080   44399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:46:54.381355   44399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:46:54.382856   44399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:46:54.384302   44399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:46:54.385962   44399 config.go:182] Loaded profile config "cert-expiration-393248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:46:54.386065   44399 config.go:182] Loaded profile config "kubernetes-upgrade-541086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:46:54.386175   44399 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:46:54.386270   44399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:46:54.431097   44399 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:46:54.432452   44399 start.go:299] selected driver: kvm2
	I0229 18:46:54.432470   44399 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:46:54.432481   44399 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:46:54.433194   44399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:54.433285   44399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:46:54.449032   44399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:46:54.449077   44399 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:46:54.449279   44399 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:46:54.449345   44399 cni.go:84] Creating CNI manager for ""
	I0229 18:46:54.449357   44399 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:46:54.449369   44399 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:46:54.449377   44399 start_flags.go:323] config:
	{Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:46:54.449502   44399 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:54.451402   44399 out.go:177] * Starting control plane node old-k8s-version-631080 in cluster old-k8s-version-631080
	I0229 18:46:54.452953   44399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:46:54.452989   44399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 18:46:54.452997   44399 cache.go:56] Caching tarball of preloaded images
	I0229 18:46:54.453097   44399 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:46:54.453109   44399 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 18:46:54.453247   44399 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:46:54.453275   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json: {Name:mk693df3a443eed5d36e58a0dbdf9df907f81cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:54.453440   44399 start.go:365] acquiring machines lock for old-k8s-version-631080: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:46:54.453494   44399 start.go:369] acquired machines lock for "old-k8s-version-631080" in 31.039µs
	I0229 18:46:54.453520   44399 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:46:54.453594   44399 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:46:54.455127   44399 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:46:54.455269   44399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:46:54.455312   44399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:54.469961   44399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0229 18:46:54.470376   44399 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:54.470941   44399 main.go:141] libmachine: Using API Version  1
	I0229 18:46:54.470967   44399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:54.471369   44399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:54.471599   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:46:54.471784   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:46:54.471957   44399 start.go:159] libmachine.API.Create for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:46:54.471986   44399 client.go:168] LocalClient.Create starting
	I0229 18:46:54.472020   44399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 18:46:54.472059   44399 main.go:141] libmachine: Decoding PEM data...
	I0229 18:46:54.472084   44399 main.go:141] libmachine: Parsing certificate...
	I0229 18:46:54.472154   44399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 18:46:54.472181   44399 main.go:141] libmachine: Decoding PEM data...
	I0229 18:46:54.472199   44399 main.go:141] libmachine: Parsing certificate...
	I0229 18:46:54.472221   44399 main.go:141] libmachine: Running pre-create checks...
	I0229 18:46:54.472241   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .PreCreateCheck
	I0229 18:46:54.472643   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:46:54.473044   44399 main.go:141] libmachine: Creating machine...
	I0229 18:46:54.473061   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .Create
	I0229 18:46:54.473199   44399 main.go:141] libmachine: (old-k8s-version-631080) Creating KVM machine...
	I0229 18:46:54.474775   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found existing default KVM network
	I0229 18:46:54.476306   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.476129   44421 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:45:ad} reservation:<nil>}
	I0229 18:46:54.477088   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.476986   44421 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:29:68} reservation:<nil>}
	I0229 18:46:54.479353   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.479218   44421 network.go:210] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:46:54.480176   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.480076   44421 network.go:212] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:23:4a} reservation:<nil>}
	I0229 18:46:54.481459   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.481348   44421 network.go:207] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015540}
	I0229 18:46:54.486695   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | trying to create private KVM network mk-old-k8s-version-631080 192.168.83.0/24...
	I0229 18:46:54.563569   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | private KVM network mk-old-k8s-version-631080 192.168.83.0/24 created
	I0229 18:46:54.563614   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.563536   44421 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:46:54.563632   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080 ...
	I0229 18:46:54.563661   44399 main.go:141] libmachine: (old-k8s-version-631080) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:46:54.563773   44399 main.go:141] libmachine: (old-k8s-version-631080) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:46:54.796669   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.796532   44421 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa...
	I0229 18:46:54.901732   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.901616   44421 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/old-k8s-version-631080.rawdisk...
	I0229 18:46:54.901761   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Writing magic tar header
	I0229 18:46:54.901797   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Writing SSH key tar header
	I0229 18:46:54.901813   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:54.901737   44421 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080 ...
	I0229 18:46:54.901836   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080
	I0229 18:46:54.901864   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080 (perms=drwx------)
	I0229 18:46:54.901884   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 18:46:54.901899   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:46:54.901918   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 18:46:54.901930   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 18:46:54.901944   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:46:54.901962   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:46:54.901975   44399 main.go:141] libmachine: (old-k8s-version-631080) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:46:54.901991   44399 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:46:54.902009   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 18:46:54.902023   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:46:54.902035   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:46:54.902047   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Checking permissions on dir: /home
	I0229 18:46:54.902059   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Skipping /home - not owner
	I0229 18:46:54.903138   44399 main.go:141] libmachine: (old-k8s-version-631080) define libvirt domain using xml: 
	I0229 18:46:54.903167   44399 main.go:141] libmachine: (old-k8s-version-631080) <domain type='kvm'>
	I0229 18:46:54.903179   44399 main.go:141] libmachine: (old-k8s-version-631080)   <name>old-k8s-version-631080</name>
	I0229 18:46:54.903192   44399 main.go:141] libmachine: (old-k8s-version-631080)   <memory unit='MiB'>2200</memory>
	I0229 18:46:54.903201   44399 main.go:141] libmachine: (old-k8s-version-631080)   <vcpu>2</vcpu>
	I0229 18:46:54.903209   44399 main.go:141] libmachine: (old-k8s-version-631080)   <features>
	I0229 18:46:54.903217   44399 main.go:141] libmachine: (old-k8s-version-631080)     <acpi/>
	I0229 18:46:54.903224   44399 main.go:141] libmachine: (old-k8s-version-631080)     <apic/>
	I0229 18:46:54.903230   44399 main.go:141] libmachine: (old-k8s-version-631080)     <pae/>
	I0229 18:46:54.903237   44399 main.go:141] libmachine: (old-k8s-version-631080)     
	I0229 18:46:54.903242   44399 main.go:141] libmachine: (old-k8s-version-631080)   </features>
	I0229 18:46:54.903247   44399 main.go:141] libmachine: (old-k8s-version-631080)   <cpu mode='host-passthrough'>
	I0229 18:46:54.903252   44399 main.go:141] libmachine: (old-k8s-version-631080)   
	I0229 18:46:54.903259   44399 main.go:141] libmachine: (old-k8s-version-631080)   </cpu>
	I0229 18:46:54.903290   44399 main.go:141] libmachine: (old-k8s-version-631080)   <os>
	I0229 18:46:54.903315   44399 main.go:141] libmachine: (old-k8s-version-631080)     <type>hvm</type>
	I0229 18:46:54.903324   44399 main.go:141] libmachine: (old-k8s-version-631080)     <boot dev='cdrom'/>
	I0229 18:46:54.903332   44399 main.go:141] libmachine: (old-k8s-version-631080)     <boot dev='hd'/>
	I0229 18:46:54.903348   44399 main.go:141] libmachine: (old-k8s-version-631080)     <bootmenu enable='no'/>
	I0229 18:46:54.903375   44399 main.go:141] libmachine: (old-k8s-version-631080)   </os>
	I0229 18:46:54.903393   44399 main.go:141] libmachine: (old-k8s-version-631080)   <devices>
	I0229 18:46:54.903409   44399 main.go:141] libmachine: (old-k8s-version-631080)     <disk type='file' device='cdrom'>
	I0229 18:46:54.903427   44399 main.go:141] libmachine: (old-k8s-version-631080)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/boot2docker.iso'/>
	I0229 18:46:54.903440   44399 main.go:141] libmachine: (old-k8s-version-631080)       <target dev='hdc' bus='scsi'/>
	I0229 18:46:54.903453   44399 main.go:141] libmachine: (old-k8s-version-631080)       <readonly/>
	I0229 18:46:54.903464   44399 main.go:141] libmachine: (old-k8s-version-631080)     </disk>
	I0229 18:46:54.903511   44399 main.go:141] libmachine: (old-k8s-version-631080)     <disk type='file' device='disk'>
	I0229 18:46:54.903540   44399 main.go:141] libmachine: (old-k8s-version-631080)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:46:54.903578   44399 main.go:141] libmachine: (old-k8s-version-631080)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/old-k8s-version-631080.rawdisk'/>
	I0229 18:46:54.903590   44399 main.go:141] libmachine: (old-k8s-version-631080)       <target dev='hda' bus='virtio'/>
	I0229 18:46:54.903601   44399 main.go:141] libmachine: (old-k8s-version-631080)     </disk>
	I0229 18:46:54.903615   44399 main.go:141] libmachine: (old-k8s-version-631080)     <interface type='network'>
	I0229 18:46:54.903630   44399 main.go:141] libmachine: (old-k8s-version-631080)       <source network='mk-old-k8s-version-631080'/>
	I0229 18:46:54.903640   44399 main.go:141] libmachine: (old-k8s-version-631080)       <model type='virtio'/>
	I0229 18:46:54.903649   44399 main.go:141] libmachine: (old-k8s-version-631080)     </interface>
	I0229 18:46:54.903671   44399 main.go:141] libmachine: (old-k8s-version-631080)     <interface type='network'>
	I0229 18:46:54.903683   44399 main.go:141] libmachine: (old-k8s-version-631080)       <source network='default'/>
	I0229 18:46:54.903691   44399 main.go:141] libmachine: (old-k8s-version-631080)       <model type='virtio'/>
	I0229 18:46:54.903703   44399 main.go:141] libmachine: (old-k8s-version-631080)     </interface>
	I0229 18:46:54.903713   44399 main.go:141] libmachine: (old-k8s-version-631080)     <serial type='pty'>
	I0229 18:46:54.903723   44399 main.go:141] libmachine: (old-k8s-version-631080)       <target port='0'/>
	I0229 18:46:54.903734   44399 main.go:141] libmachine: (old-k8s-version-631080)     </serial>
	I0229 18:46:54.903768   44399 main.go:141] libmachine: (old-k8s-version-631080)     <console type='pty'>
	I0229 18:46:54.903790   44399 main.go:141] libmachine: (old-k8s-version-631080)       <target type='serial' port='0'/>
	I0229 18:46:54.903802   44399 main.go:141] libmachine: (old-k8s-version-631080)     </console>
	I0229 18:46:54.903823   44399 main.go:141] libmachine: (old-k8s-version-631080)     <rng model='virtio'>
	I0229 18:46:54.903837   44399 main.go:141] libmachine: (old-k8s-version-631080)       <backend model='random'>/dev/random</backend>
	I0229 18:46:54.903844   44399 main.go:141] libmachine: (old-k8s-version-631080)     </rng>
	I0229 18:46:54.903874   44399 main.go:141] libmachine: (old-k8s-version-631080)     
	I0229 18:46:54.903896   44399 main.go:141] libmachine: (old-k8s-version-631080)     
	I0229 18:46:54.903909   44399 main.go:141] libmachine: (old-k8s-version-631080)   </devices>
	I0229 18:46:54.903919   44399 main.go:141] libmachine: (old-k8s-version-631080) </domain>
	I0229 18:46:54.903934   44399 main.go:141] libmachine: (old-k8s-version-631080) 
	I0229 18:46:54.908089   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:45:2c:7a in network default
	I0229 18:46:54.908835   44399 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:46:54.908863   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:54.909555   44399 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:46:54.909933   44399 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:46:54.910529   44399 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:46:54.911447   44399 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:46:56.226634   44399 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:46:56.227623   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:56.228185   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:56.228226   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:56.228176   44421 retry.go:31] will retry after 276.827254ms: waiting for machine to come up
	I0229 18:46:56.506898   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:56.507677   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:56.507720   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:56.507666   44421 retry.go:31] will retry after 320.605046ms: waiting for machine to come up
	I0229 18:46:56.830182   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:56.830698   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:56.830737   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:56.830662   44421 retry.go:31] will retry after 403.574322ms: waiting for machine to come up
	I0229 18:46:57.236342   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:57.236789   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:57.236814   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:57.236746   44421 retry.go:31] will retry after 382.417907ms: waiting for machine to come up
	I0229 18:46:57.621375   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:57.621893   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:57.621926   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:57.621845   44421 retry.go:31] will retry after 591.038111ms: waiting for machine to come up
	I0229 18:46:58.214069   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:58.214557   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:58.214589   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:58.214520   44421 retry.go:31] will retry after 636.138059ms: waiting for machine to come up
	I0229 18:46:58.851985   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:46:58.852441   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:46:58.852466   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:46:58.852388   44421 retry.go:31] will retry after 1.145847764s: waiting for machine to come up
	I0229 18:47:00.000454   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:00.000992   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:00.001059   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:00.000970   44421 retry.go:31] will retry after 1.140987822s: waiting for machine to come up
	I0229 18:47:01.143497   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:01.144021   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:01.144055   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:01.143974   44421 retry.go:31] will retry after 1.763492491s: waiting for machine to come up
	I0229 18:47:02.908596   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:02.909195   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:02.909224   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:02.909149   44421 retry.go:31] will retry after 2.156812225s: waiting for machine to come up
	I0229 18:47:05.068142   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:05.068756   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:05.068790   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:05.068705   44421 retry.go:31] will retry after 2.541472609s: waiting for machine to come up
	I0229 18:47:07.612579   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:07.613127   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:07.613150   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:07.613071   44421 retry.go:31] will retry after 2.349373813s: waiting for machine to come up
	I0229 18:47:09.963760   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:09.964188   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:09.964220   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:09.964150   44421 retry.go:31] will retry after 3.751562898s: waiting for machine to come up
	I0229 18:47:13.716793   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:13.717271   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:47:13.717296   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:47:13.717225   44421 retry.go:31] will retry after 4.503795972s: waiting for machine to come up
	I0229 18:47:18.224043   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.224566   44399 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:47:18.224591   44399 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:47:18.224626   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.224908   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080
	I0229 18:47:18.299959   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:47:18.299985   44399 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:47:18.299997   44399 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:47:18.302466   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.302909   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.302938   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.303174   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:47:18.303195   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:47:18.303223   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:47:18.303239   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:47:18.303252   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:47:18.435600   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:47:18.435889   44399 main.go:141] libmachine: (old-k8s-version-631080) KVM machine creation complete!
	I0229 18:47:18.436301   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:47:18.436823   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:18.437032   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:18.437207   44399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:47:18.437223   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:47:18.438569   44399 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:47:18.438585   44399 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:47:18.438592   44399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:47:18.438600   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.441240   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.441690   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.441713   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.441907   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.442110   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.442258   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.442413   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.442570   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.442821   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.442838   44399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:47:18.559059   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:18.559084   44399 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:47:18.559092   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.562247   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.562660   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.562681   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.562845   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.563079   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.563293   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.563479   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.563662   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.563827   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.563839   44399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:47:18.684893   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:47:18.685033   44399 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:47:18.685088   44399 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:47:18.685106   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.685329   44399 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:47:18.685364   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.685505   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.688396   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.688727   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.688757   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.688831   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.689033   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.689194   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.689320   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.689520   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.689679   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.689690   44399 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:47:18.826596   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:47:18.826630   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.829407   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.829838   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.829880   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.830077   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:18.830293   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.830513   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:18.830680   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:18.830942   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:18.831174   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:18.831198   44399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:47:18.953906   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:18.953936   44399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:47:18.953994   44399 buildroot.go:174] setting up certificates
	I0229 18:47:18.954012   44399 provision.go:83] configureAuth start
	I0229 18:47:18.954031   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:47:18.954333   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:18.957242   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.957560   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.957588   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.957716   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:18.960304   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.960631   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:18.960659   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:18.960797   44399 provision.go:138] copyHostCerts
	I0229 18:47:18.960845   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:47:18.960861   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:47:18.960903   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:47:18.960994   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:47:18.961001   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:47:18.961021   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:47:18.961081   44399 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:47:18.961088   44399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:47:18.961112   44399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:47:18.961194   44399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:47:19.135550   44399 provision.go:172] copyRemoteCerts
	I0229 18:47:19.135634   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:47:19.135662   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.138560   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.138941   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.138972   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.139225   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.139414   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.139616   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.139792   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.231908   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:47:19.264773   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:47:19.298215   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:47:19.331349   44399 provision.go:86] duration metric: configureAuth took 377.321285ms
	I0229 18:47:19.331380   44399 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:47:19.331565   44399 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:47:19.331649   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.334726   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.335015   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.335082   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.335285   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.335487   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.335675   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.335852   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.336026   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.336181   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:19.336198   44399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:47:19.672748   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:47:19.672796   44399 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:47:19.672807   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetURL
	I0229 18:47:19.674145   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using libvirt version 6000000
	I0229 18:47:19.676856   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.677213   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.677245   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.677465   44399 main.go:141] libmachine: Docker is up and running!
	I0229 18:47:19.677482   44399 main.go:141] libmachine: Reticulating splines...
	I0229 18:47:19.677490   44399 client.go:171] LocalClient.Create took 25.205493908s
	I0229 18:47:19.677520   44399 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-631080" took 25.205561905s
	I0229 18:47:19.677553   44399 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:47:19.677571   44399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:47:19.677606   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.677840   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:47:19.677880   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.680494   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.680953   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.680982   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.681169   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.681386   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.681577   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.681774   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.774677   44399 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:47:19.780268   44399 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:47:19.780305   44399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:47:19.780372   44399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:47:19.780464   44399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:47:19.780560   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:47:19.793462   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:19.827834   44399 start.go:303] post-start completed in 150.26432ms
	I0229 18:47:19.827888   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:47:19.828617   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:19.831703   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.832101   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.832156   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.832447   44399 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:47:19.832629   44399 start.go:128] duration metric: createHost completed in 25.379025184s
	I0229 18:47:19.832655   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.835297   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.835694   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.835724   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.835936   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.836166   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.836336   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.836502   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.836727   44399 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.836929   44399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:47:19.836950   44399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:47:19.960285   44399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232439.945419352
	
	I0229 18:47:19.960317   44399 fix.go:206] guest clock: 1709232439.945419352
	I0229 18:47:19.960326   44399 fix.go:219] Guest: 2024-02-29 18:47:19.945419352 +0000 UTC Remote: 2024-02-29 18:47:19.832640557 +0000 UTC m=+25.510768927 (delta=112.778795ms)
	I0229 18:47:19.960359   44399 fix.go:190] guest clock delta is within tolerance: 112.778795ms
	I0229 18:47:19.960373   44399 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 25.506866987s
	I0229 18:47:19.960402   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.960711   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:19.963691   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.964054   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.964091   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.964269   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.964882   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.965100   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:47:19.965195   44399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:47:19.965240   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.965332   44399 ssh_runner.go:195] Run: cat /version.json
	I0229 18:47:19.965358   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:47:19.967874   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968255   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968285   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.968305   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968559   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.968602   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:19.968668   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:19.968768   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.968968   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:47:19.968970   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.969139   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:19.969180   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:47:19.969318   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:47:19.969424   44399 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:47:20.053589   44399 ssh_runner.go:195] Run: systemctl --version
	I0229 18:47:20.077103   44399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:47:20.252649   44399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:47:20.261187   44399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:47:20.261247   44399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:47:20.283948   44399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:47:20.283971   44399 start.go:475] detecting cgroup driver to use...
	I0229 18:47:20.284054   44399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:47:20.307439   44399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:47:20.323544   44399 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:47:20.323623   44399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:47:20.340633   44399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:47:20.357816   44399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:47:20.500836   44399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:47:20.670745   44399 docker.go:233] disabling docker service ...
	I0229 18:47:20.670818   44399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:47:20.693963   44399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:47:20.709322   44399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:47:20.857494   44399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:47:20.974334   44399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:47:20.989627   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:47:21.011314   44399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:47:21.011383   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.023324   44399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:47:21.023376   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.034944   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.047132   44399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:21.058481   44399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:47:21.069981   44399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:47:21.080871   44399 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:47:21.080937   44399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:47:21.094616   44399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:47:21.104898   44399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:47:21.218429   44399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:47:21.365983   44399 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:47:21.366055   44399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:47:21.371683   44399 start.go:543] Will wait 60s for crictl version
	I0229 18:47:21.371735   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:21.376150   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:47:21.411742   44399 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:47:21.411815   44399 ssh_runner.go:195] Run: crio --version
	I0229 18:47:21.445410   44399 ssh_runner.go:195] Run: crio --version
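Note: at this point crio has been restarted and both the socket and the crictl/crio versions have been verified. A hedged sketch of the same health check done by hand (plain systemd and crictl invocations; the endpoint path comes from the /etc/crictl.yaml written above):

	sudo systemctl is-active crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version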
	I0229 18:47:21.479306   44399 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:47:21.480717   44399 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:47:21.483538   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:21.484275   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:47:10 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:47:21.484313   44399 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:47:21.484389   44399 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:47:21.489479   44399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:47:21.504264   44399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:47:21.504323   44399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:21.544230   44399 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:47:21.544292   44399 ssh_runner.go:195] Run: which lz4
	I0229 18:47:21.549054   44399 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:47:21.553913   44399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:47:21.553942   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:47:23.377275   44399 crio.go:444] Took 1.828245 seconds to copy over tarball
	I0229 18:47:23.377346   44399 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:47:26.113551   44399 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.736173489s)
	I0229 18:47:26.113586   44399 crio.go:451] Took 2.736285 seconds to extract the tarball
	I0229 18:47:26.113598   44399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:47:26.159257   44399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:26.230903   44399 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:47:26.230931   44399 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:47:26.231011   44399 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:26.231322   44399 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.231335   44399 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:47:26.231463   44399 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.231522   44399 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.231651   44399 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.231721   44399 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.231836   44399 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.233270   44399 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.233322   44399 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.233338   44399 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.233270   44399 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.233269   44399 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.233537   44399 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:47:26.233581   44399 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:26.233604   44399 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.425313   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.472944   44399 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:47:26.472976   44399 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.473012   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.478466   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:47:26.510985   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.512523   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.512615   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.514955   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.517056   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.518110   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:47:26.521570   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:47:26.676126   44399 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:47:26.676160   44399 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.676202   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.700309   44399 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:47:26.700353   44399 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.700407   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.711695   44399 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:47:26.711734   44399 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.711768   44399 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:47:26.711779   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.711810   44399 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.711862   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.712901   44399 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:47:26.712937   44399 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.712970   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.719226   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:47:26.719242   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:47:26.719576   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:47:26.720853   44399 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:47:26.720876   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:47:26.720880   44399 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:47:26.720910   44399 ssh_runner.go:195] Run: which crictl
	I0229 18:47:26.726539   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:47:26.861202   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:47:26.861239   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:47:26.861320   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:47:26.861384   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:47:26.861423   44399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:47:26.861442   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:47:26.899807   44399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:47:27.192295   44399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:47:27.346152   44399 cache_images.go:92] LoadImages completed in 1.115199635s
	W0229 18:47:27.346250   44399 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0229 18:47:27.346330   44399 ssh_runner.go:195] Run: crio config
	I0229 18:47:27.415867   44399 cni.go:84] Creating CNI manager for ""
	I0229 18:47:27.415890   44399 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:27.415909   44399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:47:27.415932   44399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:47:27.416083   44399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:47:27.416173   44399 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
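Note: the unit drop-in printed above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (430 bytes). If the kubelet later refuses to start, as it does in this run, a sketch of how to inspect what systemd actually merged (plain systemctl, nothing minikube-specific):

	systemctl cat kubelet
	systemctl show -p ExecStart kubelet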
	I0229 18:47:27.416233   44399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:47:27.428417   44399 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:47:27.428489   44399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:47:27.440371   44399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:47:27.461663   44399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:47:27.481045   44399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
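Note: the three-document kubeadm config shown earlier lands here as /var/tmp/minikube/kubeadm.yaml.new (2181 bytes) and is later copied to kubeadm.yaml before init. One cheap sanity check, assuming the staged v1.16.0 kubeadm binary found under /var/lib/minikube/binaries above, is to ask kubeadm which images that config implies:

	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new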
	I0229 18:47:27.500501   44399 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:47:27.505078   44399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:47:27.520460   44399 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:47:27.520497   44399 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.520650   44399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:47:27.520707   44399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:47:27.520766   44399 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:47:27.520784   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt with IP's: []
	I0229 18:47:27.864189   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt ...
	I0229 18:47:27.864218   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: {Name:mk8fd53eb0b8d5b17fbea8f891f6884eeff3e169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.864375   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key ...
	I0229 18:47:27.864388   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key: {Name:mkae5ee58641b4deefdd16ee54eec9cef558c1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.864459   44399 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:47:27.864474   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 with IP's: [192.168.83.214 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:47:27.938293   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 ...
	I0229 18:47:27.938324   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109: {Name:mk03c2e3b7b0f5688b90b82a5d3b6a3e198d646f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.938503   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109 ...
	I0229 18:47:27.938539   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109: {Name:mkf8eea34d8de97d5d0f70aeb5b2b830c1240c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:27.938662   44399 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt.89a58109 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt
	I0229 18:47:27.938755   44399 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109 -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key
	I0229 18:47:27.938834   44399 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:47:27.938856   44399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt with IP's: []
	I0229 18:47:28.153554   44399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt ...
	I0229 18:47:28.153585   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt: {Name:mkef1cb59c0851a60f74685c02d6c4b49a29cffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:28.153758   44399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key ...
	I0229 18:47:28.153775   44399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key: {Name:mk4c5236bac24ddb7c6a48fbc9c96d9664cc4ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:28.153996   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:47:28.154048   44399 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:47:28.154064   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:47:28.154094   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:47:28.154134   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:47:28.154168   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:47:28.154226   44399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:28.154826   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:47:28.186389   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:47:28.215574   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:47:28.245800   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:47:28.303841   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:47:28.335355   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:47:28.363388   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:47:28.393630   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:47:28.423875   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:47:28.454377   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:47:28.483445   44399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:47:28.511360   44399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:47:28.530599   44399 ssh_runner.go:195] Run: openssl version
	I0229 18:47:28.537246   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:47:28.549806   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.555377   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.555437   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:28.563093   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:47:28.578569   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:47:28.593874   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.600519   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.600560   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:47:28.607700   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:47:28.623722   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:47:28.636281   44399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.641592   44399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.641642   44399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:47:28.649008   44399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
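Note: each CA entry above is installed twice, as a PEM under /usr/share/ca-certificates and as a hash-named symlink under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0). The hash names come from openssl's subject hash and can be reproduced by hand; for example, per the ln -fs line above, minikubeCA.pem maps to b5213941:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem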
	I0229 18:47:28.661558   44399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:47:28.666198   44399 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:47:28.666249   44399 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:28.666317   44399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:47:28.666358   44399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:47:28.716681   44399 cri.go:89] found id: ""
	I0229 18:47:28.716742   44399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:47:28.728300   44399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:47:28.739080   44399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:47:28.749442   44399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:47:28.749488   44399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:47:28.881883   44399 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:47:28.881987   44399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:47:29.160390   44399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:47:29.160580   44399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:47:29.160724   44399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:47:29.448897   44399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:47:29.451654   44399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:47:29.462589   44399 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:47:29.601848   44399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:47:29.603554   44399 out.go:204]   - Generating certificates and keys ...
	I0229 18:47:29.603696   44399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:47:29.603821   44399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:47:29.908358   44399 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:47:30.012736   44399 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:47:30.271926   44399 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:47:30.387535   44399 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:47:30.445345   44399 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:47:30.445529   44399 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	I0229 18:47:30.689130   44399 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:47:30.689407   44399 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	I0229 18:47:30.818464   44399 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:47:31.124163   44399 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:47:31.324195   44399 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:47:31.324565   44399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:47:31.499366   44399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:47:31.728125   44399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:47:31.948091   44399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:47:32.121471   44399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:47:32.122506   44399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:47:32.124299   44399 out.go:204]   - Booting up control plane ...
	I0229 18:47:32.124427   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:47:32.131277   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:47:32.132287   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:47:32.135893   44399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:47:32.141145   44399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:48:12.139145   44399 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:48:12.140170   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:12.140356   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:17.141006   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:17.141314   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:27.141887   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:27.142197   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:47.143034   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:47.143289   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:49:27.143225   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:49:27.143506   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:49:27.143530   44399 kubeadm.go:322] 
	I0229 18:49:27.143566   44399 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:49:27.143598   44399 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:49:27.143607   44399 kubeadm.go:322] 
	I0229 18:49:27.143651   44399 kubeadm.go:322] This error is likely caused by:
	I0229 18:49:27.143699   44399 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:49:27.143829   44399 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:49:27.143838   44399 kubeadm.go:322] 
	I0229 18:49:27.143945   44399 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:49:27.144032   44399 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:49:27.144090   44399 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:49:27.144101   44399 kubeadm.go:322] 
	I0229 18:49:27.144259   44399 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:49:27.144397   44399 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:49:27.144519   44399 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:49:27.144591   44399 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:49:27.144703   44399 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:49:27.144754   44399 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:49:27.145247   44399 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:49:27.145372   44399 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:49:27.145459   44399 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:49:27.145630   44399 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-631080 localhost] and IPs [192.168.83.214 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
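Note: kubeadm's troubleshooting hint above is written for Docker, but this node runs CRI-O, so the equivalent triage on the guest uses the kubelet journal and crictl. The commands below are the CRI-O counterparts of the suggestions in the error text, not something minikube runs itself:

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl ps -a | grep -v pause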
	
	I0229 18:49:27.145697   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 18:49:27.618942   44399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:27.637677   44399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:27.648309   44399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:27.648370   44399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:49:27.846736   44399 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:51:24.204624   44399 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:51:24.204745   44399 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:51:24.206345   44399 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:51:24.206421   44399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:51:24.206524   44399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:51:24.206665   44399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:51:24.206791   44399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:51:24.206878   44399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:51:24.207005   44399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:51:24.207093   44399 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:51:24.207196   44399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:51:24.209099   44399 out.go:204]   - Generating certificates and keys ...
	I0229 18:51:24.209178   44399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:51:24.209233   44399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:51:24.209315   44399 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:51:24.209384   44399 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:51:24.209489   44399 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:51:24.209584   44399 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:51:24.209684   44399 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:51:24.209756   44399 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:51:24.209840   44399 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:51:24.209931   44399 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:51:24.209991   44399 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:51:24.210080   44399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:51:24.210152   44399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:51:24.210197   44399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:51:24.210255   44399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:51:24.210305   44399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:51:24.210364   44399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:51:24.212074   44399 out.go:204]   - Booting up control plane ...
	I0229 18:51:24.212147   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:51:24.212236   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:51:24.212308   44399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:51:24.212383   44399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:51:24.212584   44399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:51:24.212652   44399 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:51:24.212714   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:51:24.212863   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:51:24.212940   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:51:24.213160   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:51:24.213220   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:51:24.213414   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:51:24.213497   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:51:24.213695   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:51:24.213762   44399 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:51:24.213977   44399 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:51:24.213988   44399 kubeadm.go:322] 
	I0229 18:51:24.214046   44399 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:51:24.214097   44399 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:51:24.214107   44399 kubeadm.go:322] 
	I0229 18:51:24.214152   44399 kubeadm.go:322] This error is likely caused by:
	I0229 18:51:24.214180   44399 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:51:24.214262   44399 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:51:24.214272   44399 kubeadm.go:322] 
	I0229 18:51:24.214352   44399 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:51:24.214379   44399 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:51:24.214407   44399 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:51:24.214424   44399 kubeadm.go:322] 
	I0229 18:51:24.214506   44399 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:51:24.214585   44399 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:51:24.214665   44399 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:51:24.214707   44399 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:51:24.214766   44399 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:51:24.214875   44399 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:51:24.214873   44399 kubeadm.go:406] StartCluster complete in 3m55.548634983s
	I0229 18:51:24.214926   44399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:51:24.214976   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:51:24.258872   44399 cri.go:89] found id: ""
	I0229 18:51:24.258900   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.258911   44399 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:51:24.258919   44399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:51:24.258983   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:51:24.295492   44399 cri.go:89] found id: ""
	I0229 18:51:24.295521   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.295532   44399 logs.go:278] No container was found matching "etcd"
	I0229 18:51:24.295539   44399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:51:24.295601   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:51:24.334265   44399 cri.go:89] found id: ""
	I0229 18:51:24.334290   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.334298   44399 logs.go:278] No container was found matching "coredns"
	I0229 18:51:24.334303   44399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:51:24.334346   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:51:24.378919   44399 cri.go:89] found id: ""
	I0229 18:51:24.378972   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.378992   44399 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:51:24.379005   44399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:51:24.379116   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:51:24.443896   44399 cri.go:89] found id: ""
	I0229 18:51:24.443929   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.443938   44399 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:51:24.443945   44399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:51:24.443996   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:51:24.478625   44399 cri.go:89] found id: ""
	I0229 18:51:24.478658   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.478666   44399 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:51:24.478672   44399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:51:24.478721   44399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:51:24.514693   44399 cri.go:89] found id: ""
	I0229 18:51:24.514723   44399 logs.go:276] 0 containers: []
	W0229 18:51:24.514742   44399 logs.go:278] No container was found matching "kindnet"
	I0229 18:51:24.514754   44399 logs.go:123] Gathering logs for kubelet ...
	I0229 18:51:24.514774   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:51:24.561120   44399 logs.go:123] Gathering logs for dmesg ...
	I0229 18:51:24.561153   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:51:24.576352   44399 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:51:24.576376   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:51:24.701431   44399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:51:24.701459   44399 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:51:24.701478   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:51:24.792147   44399 logs.go:123] Gathering logs for container status ...
	I0229 18:51:24.792181   44399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:51:24.841631   44399 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:51:24.841681   44399 out.go:239] * 
	* 
	W0229 18:51:24.841744   44399 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:51:24.841777   44399 out.go:239] * 
	* 
	W0229 18:51:24.842682   44399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:51:24.845886   44399 out.go:177] 
	W0229 18:51:24.847610   44399 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:51:24.847659   44399 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:51:24.847681   44399 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:51:24.849300   44399 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 6 (254.384339ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:51:25.138058   47039 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-631080" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (270.84s)
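Triage note: this is the kubelet-not-running pattern that minikube itself diagnoses above (exit reason K8S_KUBELET_NOT_RUNNING: the kubelet never answers on localhost:10248, so `kubeadm init` times out in wait-control-plane). A minimal triage sketch built only from commands that already appear in the log above; the cgroup-driver flag is minikube's own suggestion for this error class, not a verified fix for this run:

	# Inside the VM (e.g. via: out/minikube-linux-amd64 -p old-k8s-version-631080 ssh)
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	sudo crictl ps -a            # check whether any control-plane container ever started under CRI-O
	# Retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion in the log:
	out/minikube-linux-amd64 start -p old-k8s-version-631080 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd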

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (72.04s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-848791 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-848791 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.829048391s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-848791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-848791 in cluster pause-848791
	* Updating the running kvm2 "pause-848791" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-848791" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:46:58.282770   44536 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:46:58.283346   44536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:58.283362   44536 out.go:304] Setting ErrFile to fd 2...
	I0229 18:46:58.283370   44536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:58.283876   44536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:46:58.284793   44536 out.go:298] Setting JSON to false
	I0229 18:46:58.286762   44536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5362,"bootTime":1709227056,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:46:58.286890   44536 start.go:139] virtualization: kvm guest
	I0229 18:46:58.423206   44536 out.go:177] * [pause-848791] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:46:58.639549   44536 notify.go:220] Checking for updates...
	I0229 18:46:58.752037   44536 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:46:58.893624   44536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:46:58.905451   44536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:46:58.996865   44536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:46:58.998192   44536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:46:58.999669   44536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:46:59.001706   44536 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:46:59.002314   44536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:46:59.002376   44536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:59.018636   44536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0229 18:46:59.019150   44536 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:59.019686   44536 main.go:141] libmachine: Using API Version  1
	I0229 18:46:59.019710   44536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:59.020059   44536 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:59.020221   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:46:59.020532   44536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:46:59.020953   44536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:46:59.020993   44536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:59.034912   44536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0229 18:46:59.035326   44536 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:59.035793   44536 main.go:141] libmachine: Using API Version  1
	I0229 18:46:59.035818   44536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:59.036105   44536 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:59.036296   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:46:59.071597   44536 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:46:59.072938   44536 start.go:299] selected driver: kvm2
	I0229 18:46:59.072958   44536 start.go:903] validating driver "kvm2" against &{Name:pause-848791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-848791 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:46:59.073114   44536 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:46:59.073524   44536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:59.073609   44536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:46:59.088244   44536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:46:59.089330   44536 cni.go:84] Creating CNI manager for ""
	I0229 18:46:59.089350   44536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:46:59.089362   44536 start_flags.go:323] config:
	{Name:pause-848791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-848791 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:46:59.089613   44536 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:59.091923   44536 out.go:177] * Starting control plane node pause-848791 in cluster pause-848791
	I0229 18:46:59.093207   44536 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:46:59.093250   44536 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:46:59.093278   44536 cache.go:56] Caching tarball of preloaded images
	I0229 18:46:59.093372   44536 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:46:59.093384   44536 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:46:59.093536   44536 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/config.json ...
	I0229 18:46:59.093754   44536 start.go:365] acquiring machines lock for pause-848791: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:47:19.960432   44536 start.go:369] acquired machines lock for "pause-848791" in 20.866650965s
	I0229 18:47:19.960492   44536 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:47:19.960498   44536 fix.go:54] fixHost starting: 
	I0229 18:47:19.960919   44536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:19.960967   44536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:19.980199   44536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0229 18:47:19.980728   44536 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:19.981245   44536 main.go:141] libmachine: Using API Version  1
	I0229 18:47:19.981269   44536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:19.981688   44536 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:19.981876   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:19.982020   44536 main.go:141] libmachine: (pause-848791) Calling .GetState
	I0229 18:47:19.983642   44536 fix.go:102] recreateIfNeeded on pause-848791: state=Running err=<nil>
	W0229 18:47:19.983664   44536 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:47:19.985537   44536 out.go:177] * Updating the running kvm2 "pause-848791" VM ...
	I0229 18:47:19.986847   44536 machine.go:88] provisioning docker machine ...
	I0229 18:47:19.986875   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:19.987076   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:19.987230   44536 buildroot.go:166] provisioning hostname "pause-848791"
	I0229 18:47:19.987244   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:19.987387   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:19.989955   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:19.990378   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:19.990414   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:19.990614   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:19.990793   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:19.990991   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:19.991136   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:19.991321   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:19.991590   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:19.991611   44536 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-848791 && echo "pause-848791" | sudo tee /etc/hostname
	I0229 18:47:20.114872   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-848791
	
	I0229 18:47:20.114908   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.118232   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.118523   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.118551   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.118830   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.119167   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.119319   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.119489   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.119700   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:20.119949   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:20.119975   44536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-848791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-848791/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-848791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:47:20.244822   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:47:20.244854   44536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:47:20.244881   44536 buildroot.go:174] setting up certificates
	I0229 18:47:20.244891   44536 provision.go:83] configureAuth start
	I0229 18:47:20.244901   44536 main.go:141] libmachine: (pause-848791) Calling .GetMachineName
	I0229 18:47:20.245215   44536 main.go:141] libmachine: (pause-848791) Calling .GetIP
	I0229 18:47:20.248167   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.248531   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.248567   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.248729   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.251491   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.251852   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.251895   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.252041   44536 provision.go:138] copyHostCerts
	I0229 18:47:20.252114   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:47:20.252134   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:47:20.252209   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:47:20.252346   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:47:20.252358   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:47:20.252391   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:47:20.252487   44536 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:47:20.252498   44536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:47:20.252526   44536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:47:20.252631   44536 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.pause-848791 san=[192.168.72.95 192.168.72.95 localhost 127.0.0.1 minikube pause-848791]
	I0229 18:47:20.337511   44536 provision.go:172] copyRemoteCerts
	I0229 18:47:20.337563   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:47:20.337590   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.340832   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.341164   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.341201   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.341439   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.341666   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.341972   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.342139   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:20.433713   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:47:20.464741   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:47:20.494915   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 18:47:20.530916   44536 provision.go:86] duration metric: configureAuth took 286.011618ms
	I0229 18:47:20.530949   44536 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:47:20.531280   44536 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:20.531376   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:20.534690   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.535196   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:20.535237   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:20.535426   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:20.535693   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.535881   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:20.536072   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:20.536302   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:20.536526   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:20.536549   44536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:47:28.615338   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:47:28.615359   44536 machine.go:91] provisioned docker machine in 8.628498441s
	I0229 18:47:28.615372   44536 start.go:300] post-start starting for "pause-848791" (driver="kvm2")
	I0229 18:47:28.615382   44536 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:47:28.615409   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.615730   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:47:28.615769   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.618920   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.619414   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.619452   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.619746   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.619937   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.620136   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.620324   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.708382   44536 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:47:28.713618   44536 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:47:28.713649   44536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:47:28.713719   44536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:47:28.713787   44536 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:47:28.713901   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:47:28.725327   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:28.752879   44536 start.go:303] post-start completed in 137.496114ms
	I0229 18:47:28.752905   44536 fix.go:56] fixHost completed within 8.792406355s
	I0229 18:47:28.752930   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.755749   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.756142   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.756166   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.756304   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.756494   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.756645   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.756763   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.756900   44536 main.go:141] libmachine: Using SSH client type: native
	I0229 18:47:28.757096   44536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0229 18:47:28.757117   44536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:47:28.864821   44536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232448.859541795
	
	I0229 18:47:28.864844   44536 fix.go:206] guest clock: 1709232448.859541795
	I0229 18:47:28.864853   44536 fix.go:219] Guest: 2024-02-29 18:47:28.859541795 +0000 UTC Remote: 2024-02-29 18:47:28.752910369 +0000 UTC m=+30.589786908 (delta=106.631426ms)
	I0229 18:47:28.864878   44536 fix.go:190] guest clock delta is within tolerance: 106.631426ms
	I0229 18:47:28.864893   44536 start.go:83] releasing machines lock for "pause-848791", held for 8.904416272s
	I0229 18:47:28.864923   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.865211   44536 main.go:141] libmachine: (pause-848791) Calling .GetIP
	I0229 18:47:28.868322   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.868734   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.868773   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.868950   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869608   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869799   44536 main.go:141] libmachine: (pause-848791) Calling .DriverName
	I0229 18:47:28.869910   44536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:47:28.869956   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.870250   44536 ssh_runner.go:195] Run: cat /version.json
	I0229 18:47:28.870275   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHHostname
	I0229 18:47:28.872964   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873019   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873346   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.873380   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873409   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:28.873425   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:28.873606   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.873702   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHPort
	I0229 18:47:28.873785   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.873870   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHKeyPath
	I0229 18:47:28.873984   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.874051   44536 main.go:141] libmachine: (pause-848791) Calling .GetSSHUsername
	I0229 18:47:28.874119   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.874206   44536 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/pause-848791/id_rsa Username:docker}
	I0229 18:47:28.978934   44536 ssh_runner.go:195] Run: systemctl --version
	I0229 18:47:28.986710   44536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:47:29.153441   44536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:47:29.161741   44536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:47:29.161806   44536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:47:29.174387   44536 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:47:29.174427   44536 start.go:475] detecting cgroup driver to use...
	I0229 18:47:29.174490   44536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:47:29.196088   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:47:29.213100   44536 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:47:29.213152   44536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:47:29.228606   44536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:47:29.244268   44536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:47:29.437957   44536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:47:29.716720   44536 docker.go:233] disabling docker service ...
	I0229 18:47:29.716800   44536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:47:29.858729   44536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:47:30.080888   44536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:47:30.430034   44536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:47:30.792972   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:47:30.887568   44536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:47:30.911936   44536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:47:30.912006   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.925508   44536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:47:30.925562   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.940749   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.956037   44536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:47:30.969387   44536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:47:30.983294   44536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:47:31.015199   44536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:47:31.034727   44536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:47:31.238169   44536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:47:41.782863   44536 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.54464632s)
	I0229 18:47:41.782901   44536 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:47:41.782956   44536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:47:41.790463   44536 start.go:543] Will wait 60s for crictl version
	I0229 18:47:41.790522   44536 ssh_runner.go:195] Run: which crictl
	I0229 18:47:41.795422   44536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:47:41.838063   44536 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:47:41.838152   44536 ssh_runner.go:195] Run: crio --version
	I0229 18:47:41.881624   44536 ssh_runner.go:195] Run: crio --version
	I0229 18:47:41.934823   44536 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:47:41.936150   44536 main.go:141] libmachine: (pause-848791) Calling .GetIP
	I0229 18:47:41.939011   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:41.939385   44536 main.go:141] libmachine: (pause-848791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:ed:d1", ip: ""} in network mk-pause-848791: {Iface:virbr1 ExpiryTime:2024-02-29 19:45:31 +0000 UTC Type:0 Mac:52:54:00:00:ed:d1 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:pause-848791 Clientid:01:52:54:00:00:ed:d1}
	I0229 18:47:41.939414   44536 main.go:141] libmachine: (pause-848791) DBG | domain pause-848791 has defined IP address 192.168.72.95 and MAC address 52:54:00:00:ed:d1 in network mk-pause-848791
	I0229 18:47:41.939669   44536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:47:41.944595   44536 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:47:41.944652   44536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:41.992376   44536 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:47:41.992409   44536 crio.go:415] Images already preloaded, skipping extraction
	I0229 18:47:41.992472   44536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:47:42.042056   44536 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:47:42.042081   44536 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:47:42.042153   44536 ssh_runner.go:195] Run: crio config
	I0229 18:47:42.102582   44536 cni.go:84] Creating CNI manager for ""
	I0229 18:47:42.102610   44536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:42.102630   44536 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:47:42.102653   44536 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.95 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-848791 NodeName:pause-848791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:47:42.102837   44536 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-848791"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:47:42.102938   44536 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-848791 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-848791 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:47:42.102988   44536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:47:42.115794   44536 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:47:42.115893   44536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:47:42.131393   44536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0229 18:47:42.154110   44536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:47:42.177720   44536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0229 18:47:42.200196   44536 ssh_runner.go:195] Run: grep 192.168.72.95	control-plane.minikube.internal$ /etc/hosts
	I0229 18:47:42.204993   44536 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791 for IP: 192.168.72.95
	I0229 18:47:42.205028   44536 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:42.205197   44536 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:47:42.205252   44536 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:47:42.205339   44536 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/client.key
	I0229 18:47:42.205421   44536 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/apiserver.key.a2bb46f7
	I0229 18:47:42.205473   44536 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/proxy-client.key
	I0229 18:47:42.205618   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:47:42.205700   44536 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:47:42.205720   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:47:42.205762   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:47:42.205804   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:47:42.205842   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:47:42.205903   44536 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:47:42.206549   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:47:42.234511   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:47:42.262078   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:47:42.296178   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:47:42.329501   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:47:42.366245   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:47:42.398703   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:47:42.435375   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:47:42.473300   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:47:42.512450   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:47:42.553924   44536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:47:42.587528   44536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:47:42.610135   44536 ssh_runner.go:195] Run: openssl version
	I0229 18:47:42.618164   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:47:42.632939   44536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:47:42.639509   44536 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:47:42.639575   44536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:47:42.646755   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:47:42.658658   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:47:42.674067   44536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:42.680008   44536 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:42.680079   44536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:47:42.688240   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:47:42.703794   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:47:42.717168   44536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:47:42.723492   44536 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:47:42.723556   44536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:47:42.730620   44536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:47:42.743449   44536 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:47:42.748755   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:47:42.755756   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:47:42.762753   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:47:42.769999   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:47:42.778674   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:47:42.786421   44536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:47:42.793329   44536 kubeadm.go:404] StartCluster: {Name:pause-848791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-848791 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:42.793464   44536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:47:42.793507   44536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:47:42.856320   44536 cri.go:89] found id: "75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	I0229 18:47:42.856343   44536 cri.go:89] found id: "0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d"
	I0229 18:47:42.856349   44536 cri.go:89] found id: "0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d"
	I0229 18:47:42.856354   44536 cri.go:89] found id: "1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969"
	I0229 18:47:42.856358   44536 cri.go:89] found id: "40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be"
	I0229 18:47:42.856363   44536 cri.go:89] found id: "fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf"
	I0229 18:47:42.856367   44536 cri.go:89] found id: "22c9473e86c1bef940d4cf48d0b178ba04bef0861cc79e0950da4b4be44646f3"
	I0229 18:47:42.856372   44536 cri.go:89] found id: "7a5e64d8a3b98047f63b42836798704515b09637ced90b3844a8947f18499665"
	I0229 18:47:42.856376   44536 cri.go:89] found id: "cb5cbcdc489fa660c739f680b79801282723f18547fec1e6f5969cd5cb6fe26d"
	I0229 18:47:42.856383   44536 cri.go:89] found id: "31ebc31abdf331c0c34818c989f20965e6b1174ace9831a2411baa5b0115dc8d"
	I0229 18:47:42.856387   44536 cri.go:89] found id: "13956bec06191a70201e94f12e67e9dd50ca122651f104946f720f92084b7b50"
	I0229 18:47:42.856391   44536 cri.go:89] found id: "f9a91c6cd727cb94e1694bf30fdffafed0030b312312032fea875d567b058469"
	I0229 18:47:42.856395   44536 cri.go:89] found id: ""
	I0229 18:47:42.856437   44536 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-848791 -n pause-848791
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-848791 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-848791 logs -n 25: (1.443581784s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo find             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo crio             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-587185                       | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| start   | -p pause-848791 --memory=2048          | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:46 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-588905            | force-systemd-env-588905  | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-expiration-393248              | cert-expiration-393248    | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-297898 ssh cat      | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-297898           | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-009676 ssh                | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-009676 -- sudo         | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	| start   | -p old-k8s-version-631080              | old-k8s-version-631080    | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	| start   | -p pause-848791                        | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:48 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                   | no-preload-247197         | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC |                     |
	|         | --memory=2200 --alsologtostderr        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false            |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:47:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:47:48.415405   45067 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:47:48.415545   45067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:48.415557   45067 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:48.415563   45067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:48.415833   45067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:47:48.417027   45067 out.go:298] Setting JSON to false
	I0229 18:47:48.418894   45067 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5413,"bootTime":1709227056,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:47:48.418998   45067 start.go:139] virtualization: kvm guest
	I0229 18:47:48.421018   45067 out.go:177] * [no-preload-247197] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:47:48.423113   45067 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:47:48.423065   45067 notify.go:220] Checking for updates...
	I0229 18:47:48.424511   45067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:47:48.426360   45067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:47:48.427744   45067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:48.429066   45067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:47:48.430617   45067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:47:48.432588   45067 config.go:182] Loaded profile config "cert-expiration-393248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:48.432780   45067 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:47:48.432958   45067 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:48.433075   45067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:47:48.474750   45067 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:47:48.475999   45067 start.go:299] selected driver: kvm2
	I0229 18:47:48.476013   45067 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:47:48.476025   45067 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:47:48.476762   45067 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.476833   45067 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:47:48.492846   45067 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:47:48.492895   45067 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:47:48.493170   45067 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:47:48.493265   45067 cni.go:84] Creating CNI manager for ""
	I0229 18:47:48.493279   45067 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:48.493299   45067 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:47:48.493315   45067 start_flags.go:323] config:
	{Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:48.493481   45067 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.495575   45067 out.go:177] * Starting control plane node no-preload-247197 in cluster no-preload-247197
	I0229 18:47:48.496916   45067 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:47:48.497064   45067 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:47:48.497104   45067 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json: {Name:mk7bd922f98febc92ac069a402760ec071d4e822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
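The profile just created is persisted as JSON at .minikube/profiles/no-preload-247197/config.json. As a minimal sketch (not minikube's actual config code), the struct below is a hand-picked illustrative subset of the ClusterConfig fields dumped above, written out with a temp-file-plus-rename so an interrupted run cannot leave a half-written profile behind:

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // profileConfig is an illustrative subset of the cluster config logged
    // above; the real minikube ClusterConfig has many more fields.
    type profileConfig struct {
    	Name              string `json:"Name"`
    	Memory            int    `json:"Memory"`
    	CPUs              int    `json:"CPUs"`
    	Driver            string `json:"Driver"`
    	KubernetesVersion string `json:"KubernetesVersion"`
    	ContainerRuntime  string `json:"ContainerRuntime"`
    }

    // saveProfile writes cfg to <dir>/config.json via a temp file + rename,
    // so readers never observe a partially written profile.
    func saveProfile(dir string, cfg profileConfig) error {
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	tmp := filepath.Join(dir, ".config.json.tmp")
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, "config.json"))
    }

    func main() {
    	_ = saveProfile("/tmp/no-preload-247197", profileConfig{
    		Name:              "no-preload-247197",
    		Memory:            2200,
    		CPUs:              2,
    		Driver:            "kvm2",
    		KubernetesVersion: "v1.29.0-rc.2",
    		ContainerRuntime:  "crio",
    	})
    }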
	I0229 18:47:48.497213   45067 cache.go:107] acquiring lock: {Name:mk06b7fdf249210ec62788ccdafc872bcfcea452 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497221   45067 cache.go:107] acquiring lock: {Name:mkae6606a1bf5cc34f8177d5b5bbc79dd658ace6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497235   45067 cache.go:107] acquiring lock: {Name:mk60e308c69e43210797f13239849b555a97cc76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497254   45067 cache.go:107] acquiring lock: {Name:mka04c760f627d3cb8a149022a1a807e9c41eca5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497287   45067 cache.go:107] acquiring lock: {Name:mkbed3667a1fa6e9621d28444017016a6fd1a369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497305   45067 cache.go:115] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0229 18:47:48.497309   45067 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:47:48.497319   45067 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.057µs
	I0229 18:47:48.497336   45067 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0229 18:47:48.497336   45067 cache.go:107] acquiring lock: {Name:mk19d8daa969d7d0f0327e27d7a7e329c82532be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497352   45067 cache.go:107] acquiring lock: {Name:mk5c2bbd01fb2a58b3fa81ca9ef4e086ccb53efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497387   45067 start.go:369] acquired machines lock for "no-preload-247197" in 65.873µs
	I0229 18:47:48.497389   45067 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:47:48.497384   45067 cache.go:107] acquiring lock: {Name:mk1bac3238e53014886fa144bcfd676359aa3d56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497423   45067 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:47:48.497461   45067 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:47:48.497460   45067 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:47:48.497483   45067 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:47:48.497414   45067 start.go:93] Provisioning new machine with config: &{Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:47:48.497562   45067 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:47:48.497591   45067 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:47:48.713534   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:47:48.713573   44536 retry.go:31] will retry after 243.225797ms: https://192.168.72.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:47:48.957051   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:48.965247   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:48.965288   44536 retry.go:31] will retry after 373.516106ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.339549   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:49.344358   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.344400   44536 retry.go:31] will retry after 482.004376ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.827047   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:49.836510   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.836607   44536 retry.go:31] will retry after 502.036042ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:50.338789   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:50.348693   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 200:
	ok
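The block above is the apiserver being polled on /healthz until it finally returns 200 "ok". A minimal sketch of that kind of poll loop, assuming a plain net/http client with TLS verification disabled purely for illustration (minikube's api_server.go authenticates against the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the deadline expires, backing off a little between attempts.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustration only: skip TLS verification instead of loading the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // the "ok" case above
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(delay)
    		if delay < 2*time.Second {
    			delay *= 2
    		}
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	_ = waitForHealthz("https://192.168.72.95:8443/healthz", 2*time.Minute)
    }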
	I0229 18:47:50.369721   44536 system_pods.go:86] 6 kube-system pods found
	I0229 18:47:50.369754   44536 system_pods.go:89] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:47:50.369762   44536 system_pods.go:89] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:47:50.369774   44536 system_pods.go:89] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:47:50.369785   44536 system_pods.go:89] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:47:50.369789   44536 system_pods.go:89] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:47:50.369795   44536 system_pods.go:89] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:47:50.370939   44536 api_server.go:141] control plane version: v1.28.4
	I0229 18:47:50.370963   44536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.95
	I0229 18:47:50.370969   44536 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 18:47:50.370974   44536 kubeadm.go:640] restartCluster took 7.456363527s
	I0229 18:47:50.370980   44536 kubeadm.go:406] StartCluster complete in 7.577662782s
	I0229 18:47:50.370993   44536 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:50.371080   44536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:47:50.372031   44536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:50.372243   44536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:47:50.372370   44536 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:47:50.375060   44536 out.go:177] * Enabled addons: 
	I0229 18:47:50.372523   44536 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:50.373102   44536 kapi.go:59] client config for pause-848791: &rest.Config{Host:"https://192.168.72.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:47:50.376410   44536 addons.go:505] enable addons completed in 4.044261ms: enabled=[]
	I0229 18:47:50.379950   44536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-848791" context rescaled to 1 replicas
	I0229 18:47:50.380018   44536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:47:50.381896   44536 out.go:177] * Verifying Kubernetes components...
	I0229 18:47:50.383348   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:47:50.502672   44536 node_ready.go:35] waiting up to 6m0s for node "pause-848791" to be "Ready" ...
	I0229 18:47:50.502936   44536 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:47:50.506512   44536 node_ready.go:49] node "pause-848791" has status "Ready":"True"
	I0229 18:47:50.506534   44536 node_ready.go:38] duration metric: took 3.833105ms waiting for node "pause-848791" to be "Ready" ...
	I0229 18:47:50.506545   44536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:47:50.511840   44536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace to be "Ready" ...
	I0229 18:47:52.520802   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
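Here pod_ready.go is waiting for the coredns pod's PodReady condition to turn True. A minimal client-go sketch of the same check, reusing the kubeconfig path, namespace, and pod name from the log; error handling is trimmed and the loop has no deadline, so this is an illustration rather than minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18259-6428/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-h88pr", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("coredns is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }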
	I0229 18:47:48.499985   45067 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:47:48.497716   45067 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:47:48.499149   45067 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:47:48.500165   45067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:48.500197   45067 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:48.499163   45067 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:47:48.499182   45067 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:47:48.499176   45067 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:47:48.499189   45067 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:47:48.499305   45067 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:47:48.500902   45067 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:47:48.516908   45067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0229 18:47:48.517362   45067 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:48.517995   45067 main.go:141] libmachine: Using API Version  1
	I0229 18:47:48.518025   45067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:48.518329   45067 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:48.518563   45067 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:47:48.518732   45067 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:47:48.518939   45067 start.go:159] libmachine.API.Create for "no-preload-247197" (driver="kvm2")
	I0229 18:47:48.519010   45067 client.go:168] LocalClient.Create starting
	I0229 18:47:48.519059   45067 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 18:47:48.519110   45067 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:48.519135   45067 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:48.519200   45067 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 18:47:48.519225   45067 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:48.519237   45067 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:48.519262   45067 main.go:141] libmachine: Running pre-create checks...
	I0229 18:47:48.519271   45067 main.go:141] libmachine: (no-preload-247197) Calling .PreCreateCheck
	I0229 18:47:48.519684   45067 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:47:48.520111   45067 main.go:141] libmachine: Creating machine...
	I0229 18:47:48.520127   45067 main.go:141] libmachine: (no-preload-247197) Calling .Create
	I0229 18:47:48.520273   45067 main.go:141] libmachine: (no-preload-247197) Creating KVM machine...
	I0229 18:47:48.521861   45067 main.go:141] libmachine: (no-preload-247197) DBG | found existing default KVM network
	I0229 18:47:48.523492   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.523322   45089 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:45:ad} reservation:<nil>}
	I0229 18:47:48.524942   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.524848   45089 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002889b0}
	I0229 18:47:48.531191   45067 main.go:141] libmachine: (no-preload-247197) DBG | trying to create private KVM network mk-no-preload-247197 192.168.50.0/24...
	I0229 18:47:48.613639   45067 main.go:141] libmachine: (no-preload-247197) DBG | private KVM network mk-no-preload-247197 192.168.50.0/24 created
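network.go above skipped 192.168.39.0/24 because it was already taken and settled on the free 192.168.50.0/24 for the new libvirt network. A simplified sketch of that decision, checking candidate /24s against the subnets already assigned to host interfaces (the candidate list here is illustrative, not minikube's actual search order):

    package main

    import (
    	"fmt"
    	"net"
    )

    // freeSubnet returns the first candidate /24 that does not overlap any
    // subnet already assigned to a host interface.
    func freeSubnet(candidates []string) (*net.IPNet, error) {
    	var taken []*net.IPNet
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return nil, err
    	}
    	for _, a := range addrs {
    		if _, n, err := net.ParseCIDR(a.String()); err == nil {
    			taken = append(taken, n)
    		}
    	}
    	for _, c := range candidates {
    		_, subnet, err := net.ParseCIDR(c)
    		if err != nil {
    			return nil, err
    		}
    		overlaps := false
    		for _, t := range taken {
    			if t.Contains(subnet.IP) || subnet.Contains(t.IP) {
    				overlaps = true // e.g. 192.168.39.0/24 is taken by virbr3 above
    				break
    			}
    		}
    		if !overlaps {
    			return subnet, nil
    		}
    	}
    	return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
    	s, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
    	fmt.Println(s, err)
    }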
	I0229 18:47:48.613683   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.613620   45089 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:48.613702   45067 main.go:141] libmachine: (no-preload-247197) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 ...
	I0229 18:47:48.613713   45067 main.go:141] libmachine: (no-preload-247197) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:47:48.613787   45067 main.go:141] libmachine: (no-preload-247197) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:47:48.642384   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0229 18:47:48.645997   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:47:48.649979   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:47:48.655461   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:47:48.657361   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:47:48.680883   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:47:48.711682   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:47:48.717916   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0229 18:47:48.717943   45067 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 220.609833ms
	I0229 18:47:48.717959   45067 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0229 18:47:48.846558   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.846469   45089 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa...
	I0229 18:47:49.228895   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0229 18:47:49.228924   45067 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 731.71791ms
	I0229 18:47:49.228942   45067 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
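Because this profile was started with --preload=false, every control-plane image is pulled and saved as a tarball under .minikube/cache/images, which is what the cache.go lines above report. A sketch of that save step using go-containerregistry's crane package, which is an assumption chosen for illustration; minikube's cache.go has its own wrapper around image handling:

    package main

    import (
    	"fmt"

    	"github.com/google/go-containerregistry/pkg/crane"
    )

    // cacheImage pulls an image and writes it to a local tarball, the same
    // shape of artifact the "save to tar file" lines above describe.
    func cacheImage(ref, path string) error {
    	img, err := crane.Pull(ref)
    	if err != nil {
    		return fmt.Errorf("pull %s: %w", ref, err)
    	}
    	return crane.Save(img, ref, path)
    }

    func main() {
    	if err := cacheImage("registry.k8s.io/pause:3.9", "/tmp/pause_3.9"); err != nil {
    		panic(err)
    	}
    }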
	I0229 18:47:49.264564   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:49.264450   45089 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/no-preload-247197.rawdisk...
	I0229 18:47:49.264591   45067 main.go:141] libmachine: (no-preload-247197) DBG | Writing magic tar header
	I0229 18:47:49.264611   45067 main.go:141] libmachine: (no-preload-247197) DBG | Writing SSH key tar header
	I0229 18:47:49.264629   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:49.264594   45089 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 ...
	I0229 18:47:49.264722   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197
	I0229 18:47:49.264756   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 (perms=drwx------)
	I0229 18:47:49.264768   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 18:47:49.264787   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:47:49.264803   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 18:47:49.264812   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 18:47:49.264819   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:47:49.264840   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:47:49.264855   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:49.264866   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 18:47:49.264880   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:47:49.264893   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:47:49.264905   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home
	I0229 18:47:49.264913   45067 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:47:49.264930   45067 main.go:141] libmachine: (no-preload-247197) DBG | Skipping /home - not owner
	I0229 18:47:49.266194   45067 main.go:141] libmachine: (no-preload-247197) define libvirt domain using xml: 
	I0229 18:47:49.266226   45067 main.go:141] libmachine: (no-preload-247197) <domain type='kvm'>
	I0229 18:47:49.266233   45067 main.go:141] libmachine: (no-preload-247197)   <name>no-preload-247197</name>
	I0229 18:47:49.266245   45067 main.go:141] libmachine: (no-preload-247197)   <memory unit='MiB'>2200</memory>
	I0229 18:47:49.266280   45067 main.go:141] libmachine: (no-preload-247197)   <vcpu>2</vcpu>
	I0229 18:47:49.266308   45067 main.go:141] libmachine: (no-preload-247197)   <features>
	I0229 18:47:49.266334   45067 main.go:141] libmachine: (no-preload-247197)     <acpi/>
	I0229 18:47:49.266360   45067 main.go:141] libmachine: (no-preload-247197)     <apic/>
	I0229 18:47:49.266367   45067 main.go:141] libmachine: (no-preload-247197)     <pae/>
	I0229 18:47:49.266377   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.266392   45067 main.go:141] libmachine: (no-preload-247197)   </features>
	I0229 18:47:49.266401   45067 main.go:141] libmachine: (no-preload-247197)   <cpu mode='host-passthrough'>
	I0229 18:47:49.266412   45067 main.go:141] libmachine: (no-preload-247197)   
	I0229 18:47:49.266421   45067 main.go:141] libmachine: (no-preload-247197)   </cpu>
	I0229 18:47:49.266432   45067 main.go:141] libmachine: (no-preload-247197)   <os>
	I0229 18:47:49.266443   45067 main.go:141] libmachine: (no-preload-247197)     <type>hvm</type>
	I0229 18:47:49.266451   45067 main.go:141] libmachine: (no-preload-247197)     <boot dev='cdrom'/>
	I0229 18:47:49.266466   45067 main.go:141] libmachine: (no-preload-247197)     <boot dev='hd'/>
	I0229 18:47:49.266487   45067 main.go:141] libmachine: (no-preload-247197)     <bootmenu enable='no'/>
	I0229 18:47:49.266503   45067 main.go:141] libmachine: (no-preload-247197)   </os>
	I0229 18:47:49.266515   45067 main.go:141] libmachine: (no-preload-247197)   <devices>
	I0229 18:47:49.266534   45067 main.go:141] libmachine: (no-preload-247197)     <disk type='file' device='cdrom'>
	I0229 18:47:49.266552   45067 main.go:141] libmachine: (no-preload-247197)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/boot2docker.iso'/>
	I0229 18:47:49.266568   45067 main.go:141] libmachine: (no-preload-247197)       <target dev='hdc' bus='scsi'/>
	I0229 18:47:49.266579   45067 main.go:141] libmachine: (no-preload-247197)       <readonly/>
	I0229 18:47:49.266594   45067 main.go:141] libmachine: (no-preload-247197)     </disk>
	I0229 18:47:49.266608   45067 main.go:141] libmachine: (no-preload-247197)     <disk type='file' device='disk'>
	I0229 18:47:49.266619   45067 main.go:141] libmachine: (no-preload-247197)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:47:49.266636   45067 main.go:141] libmachine: (no-preload-247197)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/no-preload-247197.rawdisk'/>
	I0229 18:47:49.266648   45067 main.go:141] libmachine: (no-preload-247197)       <target dev='hda' bus='virtio'/>
	I0229 18:47:49.266659   45067 main.go:141] libmachine: (no-preload-247197)     </disk>
	I0229 18:47:49.266677   45067 main.go:141] libmachine: (no-preload-247197)     <interface type='network'>
	I0229 18:47:49.266693   45067 main.go:141] libmachine: (no-preload-247197)       <source network='mk-no-preload-247197'/>
	I0229 18:47:49.266706   45067 main.go:141] libmachine: (no-preload-247197)       <model type='virtio'/>
	I0229 18:47:49.266716   45067 main.go:141] libmachine: (no-preload-247197)     </interface>
	I0229 18:47:49.266728   45067 main.go:141] libmachine: (no-preload-247197)     <interface type='network'>
	I0229 18:47:49.266748   45067 main.go:141] libmachine: (no-preload-247197)       <source network='default'/>
	I0229 18:47:49.266759   45067 main.go:141] libmachine: (no-preload-247197)       <model type='virtio'/>
	I0229 18:47:49.266773   45067 main.go:141] libmachine: (no-preload-247197)     </interface>
	I0229 18:47:49.266798   45067 main.go:141] libmachine: (no-preload-247197)     <serial type='pty'>
	I0229 18:47:49.266815   45067 main.go:141] libmachine: (no-preload-247197)       <target port='0'/>
	I0229 18:47:49.266847   45067 main.go:141] libmachine: (no-preload-247197)     </serial>
	I0229 18:47:49.266865   45067 main.go:141] libmachine: (no-preload-247197)     <console type='pty'>
	I0229 18:47:49.266878   45067 main.go:141] libmachine: (no-preload-247197)       <target type='serial' port='0'/>
	I0229 18:47:49.266888   45067 main.go:141] libmachine: (no-preload-247197)     </console>
	I0229 18:47:49.266901   45067 main.go:141] libmachine: (no-preload-247197)     <rng model='virtio'>
	I0229 18:47:49.266919   45067 main.go:141] libmachine: (no-preload-247197)       <backend model='random'>/dev/random</backend>
	I0229 18:47:49.266943   45067 main.go:141] libmachine: (no-preload-247197)     </rng>
	I0229 18:47:49.266958   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.266974   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.267000   45067 main.go:141] libmachine: (no-preload-247197)   </devices>
	I0229 18:47:49.267012   45067 main.go:141] libmachine: (no-preload-247197) </domain>
	I0229 18:47:49.267028   45067 main.go:141] libmachine: (no-preload-247197) 
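The kvm2 driver then defines a libvirt domain from that XML and boots it ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch with the Go libvirt bindings; the libvirt.org/go/libvirt import path and the XML file name are assumptions, and only the qemu:///system URI comes from the log:

    package main

    import (
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers a domain from its XML definition and boots it.
    func defineAndStart(xmlPath string) error {
    	xml, err := os.ReadFile(xmlPath)
    	if err != nil {
    		return err
    	}
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create() // starts the freshly defined domain
    }

    func main() {
    	if err := defineAndStart("no-preload-247197.xml"); err != nil {
    		panic(err)
    	}
    }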
	I0229 18:47:49.271064   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:3e:dc:25 in network default
	I0229 18:47:49.271656   45067 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:47:49.271681   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:49.272432   45067 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:47:49.272762   45067 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:47:49.273295   45067 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:47:49.274016   45067 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:47:49.827306   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0229 18:47:49.827332   45067 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 1.329948794s
	I0229 18:47:49.827346   45067 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0229 18:47:49.908648   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0229 18:47:49.908677   45067 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 1.411326981s
	I0229 18:47:49.908692   45067 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0229 18:47:49.970733   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0229 18:47:49.970756   45067 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 1.473543172s
	I0229 18:47:49.970767   45067 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0229 18:47:50.081914   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0229 18:47:50.081950   45067 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.58466357s
	I0229 18:47:50.081974   45067 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0229 18:47:50.282837   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0229 18:47:50.282878   45067 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 1.785652057s
	I0229 18:47:50.282889   45067 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0229 18:47:50.282907   45067 cache.go:87] Successfully saved all images to host disk.
	I0229 18:47:50.683510   45067 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:47:50.684398   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:50.684932   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:50.684958   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:50.684890   45089 retry.go:31] will retry after 298.188719ms: waiting for machine to come up
	I0229 18:47:50.985259   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:50.985811   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:50.985859   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:50.985749   45089 retry.go:31] will retry after 383.646786ms: waiting for machine to come up
	I0229 18:47:51.371381   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:51.371809   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:51.371851   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:51.371769   45089 retry.go:31] will retry after 327.67165ms: waiting for machine to come up
	I0229 18:47:51.701181   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:51.701723   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:51.701771   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:51.701665   45089 retry.go:31] will retry after 532.283305ms: waiting for machine to come up
	I0229 18:47:52.235358   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:52.235744   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:52.235781   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:52.235696   45089 retry.go:31] will retry after 497.785715ms: waiting for machine to come up
	I0229 18:47:52.735505   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:52.735979   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:52.736006   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:52.735951   45089 retry.go:31] will retry after 636.91864ms: waiting for machine to come up
	I0229 18:47:53.374979   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:53.375506   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:53.375534   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:53.375463   45089 retry.go:31] will retry after 875.964934ms: waiting for machine to come up
	I0229 18:47:55.021818   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:57.519355   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:54.252800   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:54.253319   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:54.253344   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:54.253282   45089 retry.go:31] will retry after 1.430919856s: waiting for machine to come up
	I0229 18:47:55.685937   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:55.686401   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:55.686446   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:55.686358   45089 retry.go:31] will retry after 1.218031611s: waiting for machine to come up
	I0229 18:47:56.905950   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:56.906693   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:56.906720   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:56.906634   45089 retry.go:31] will retry after 1.803107669s: waiting for machine to come up
	I0229 18:47:59.520240   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:01.520585   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:58.711609   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:58.712108   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:58.712142   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:58.712079   45089 retry.go:31] will retry after 2.104573546s: waiting for machine to come up
	I0229 18:48:00.818388   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:48:00.818882   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:48:00.818906   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:48:00.818833   45089 retry.go:31] will retry after 2.372598202s: waiting for machine to come up
	I0229 18:48:03.194205   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:48:03.194608   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:48:03.194638   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:48:03.194555   45089 retry.go:31] will retry after 3.060606768s: waiting for machine to come up
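
The retry.go lines above poll libvirt for the domain's IP address with a growing delay between attempts. A minimal sketch of that retry-with-backoff loop, with lookupDomainIP as a hypothetical stand-in for the real DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupDomainIP is a placeholder: the real code inspects the libvirt
// network's DHCP leases for the domain's MAC address.
func lookupDomainIP(domain string) (string, error) {
	return "", errNoIP
}

// waitForIP retries with an increasing, jittered delay until the machine
// reports an address or the timeout expires.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupDomainIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if _, err := waitForIP("no-preload-247197", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
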
	I0229 18:48:03.520673   44536 pod_ready.go:92] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:03.520696   44536 pod_ready.go:81] duration metric: took 13.008837482s waiting for pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.520707   44536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.526448   44536 pod_ready.go:92] pod "etcd-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:03.526470   44536 pod_ready.go:81] duration metric: took 5.756933ms waiting for pod "etcd-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.526479   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.038466   44536 pod_ready.go:92] pod "kube-apiserver-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.038490   44536 pod_ready.go:81] duration metric: took 1.512003862s waiting for pod "kube-apiserver-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.038499   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.045588   44536 pod_ready.go:92] pod "kube-controller-manager-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.045615   44536 pod_ready.go:81] duration metric: took 7.108652ms waiting for pod "kube-controller-manager-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.045628   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2m9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.056050   44536 pod_ready.go:92] pod "kube-proxy-l2m9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.056072   44536 pod_ready.go:81] duration metric: took 10.437029ms waiting for pod "kube-proxy-l2m9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.056081   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.118583   44536 pod_ready.go:92] pod "kube-scheduler-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.118605   44536 pod_ready.go:81] duration metric: took 62.517974ms waiting for pod "kube-scheduler-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.118613   44536 pod_ready.go:38] duration metric: took 14.612056214s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
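
The pod_ready.go lines above repeatedly fetch each pod and wait for its Ready condition to become True. A minimal client-go sketch of that polling, assuming a reachable kubeconfig at the default location; the pod and namespace names below are only illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // the real loop also logs the current status each pass
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-h88pr", 6*time.Minute))
}
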
	I0229 18:48:05.118628   44536 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:48:05.118673   44536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:48:05.146323   44536 api_server.go:72] duration metric: took 14.766251771s to wait for apiserver process to appear ...
	I0229 18:48:05.146354   44536 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:48:05.146375   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:48:05.154472   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0229 18:48:05.158525   44536 api_server.go:141] control plane version: v1.28.4
	I0229 18:48:05.158551   44536 api_server.go:131] duration metric: took 12.188675ms to wait for apiserver health ...
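
The healthz check above expects an HTTP 200 with body "ok" from the apiserver. A minimal sketch of that probe; the real client authenticates with the cluster's client certificates, and InsecureSkipVerify is used here only to keep the illustration short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns nil only when /healthz answers 200 with body "ok".
func apiserverHealthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.72.95:8443/healthz"))
}
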
	I0229 18:48:05.158562   44536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:48:05.320621   44536 system_pods.go:59] 6 kube-system pods found
	I0229 18:48:05.320648   44536 system_pods.go:61] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running
	I0229 18:48:05.320654   44536 system_pods.go:61] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running
	I0229 18:48:05.320659   44536 system_pods.go:61] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running
	I0229 18:48:05.320664   44536 system_pods.go:61] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running
	I0229 18:48:05.320677   44536 system_pods.go:61] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:48:05.320682   44536 system_pods.go:61] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running
	I0229 18:48:05.320688   44536 system_pods.go:74] duration metric: took 162.119764ms to wait for pod list to return data ...
	I0229 18:48:05.320698   44536 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:48:05.516725   44536 default_sa.go:45] found service account: "default"
	I0229 18:48:05.516750   44536 default_sa.go:55] duration metric: took 196.045292ms for default service account to be created ...
	I0229 18:48:05.516760   44536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:48:05.719711   44536 system_pods.go:86] 6 kube-system pods found
	I0229 18:48:05.719742   44536 system_pods.go:89] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running
	I0229 18:48:05.719748   44536 system_pods.go:89] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running
	I0229 18:48:05.719753   44536 system_pods.go:89] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running
	I0229 18:48:05.719760   44536 system_pods.go:89] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running
	I0229 18:48:05.719766   44536 system_pods.go:89] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:48:05.719772   44536 system_pods.go:89] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running
	I0229 18:48:05.719781   44536 system_pods.go:126] duration metric: took 203.014596ms to wait for k8s-apps to be running ...
	I0229 18:48:05.719791   44536 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:48:05.719841   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:48:05.737449   44536 system_svc.go:56] duration metric: took 17.647432ms WaitForService to wait for kubelet.
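
The kubelet check above relies on systemctl's exit status rather than its output. A minimal local sketch of the same idea (the real check runs the command over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active; with --quiet the
// exit status alone carries the answer.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
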
	I0229 18:48:05.737482   44536 kubeadm.go:581] duration metric: took 15.357414146s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:48:05.737502   44536 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:48:05.918606   44536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:48:05.918640   44536 node_conditions.go:123] node cpu capacity is 2
	I0229 18:48:05.918654   44536 node_conditions.go:105] duration metric: took 181.146465ms to run NodePressure ...
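
The node_conditions.go lines read ephemeral-storage and CPU capacity from each node's status. A minimal client-go sketch of the same lookup, assuming the default kubeconfig as in the earlier sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity on the node's status.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
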
	I0229 18:48:05.918668   44536 start.go:228] waiting for startup goroutines ...
	I0229 18:48:05.918698   44536 start.go:233] waiting for cluster config update ...
	I0229 18:48:05.918712   44536 start.go:242] writing updated cluster config ...
	I0229 18:48:05.919051   44536 ssh_runner.go:195] Run: rm -f paused
	I0229 18:48:05.973608   44536 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:48:05.975815   44536 out.go:177] * Done! kubectl is now configured to use "pause-848791" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.657964890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232486657931749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c321917c-8cab-4713-9c0a-0a50cfe53a69 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.663925861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0860e41-53aa-458e-bb0d-4497bdfe0f78 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.664015838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0860e41-53aa-458e-bb0d-4497bdfe0f78 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.664347796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0860e41-53aa-458e-bb0d-4497bdfe0f78 name=/runtime.v1.RuntimeService/ListContainers
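
The CRI-O entries above are the runtime's side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving over its unix socket. Below is a rough sketch of issuing the same calls with the published CRI client packages; the socket path and package versions are assumptions about this environment, not taken from the run.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RuntimeService/Version call that appears in the log above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter returns the full container list, as the log notes.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
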
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.716404592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3798c4a1-47da-4a8d-abb6-f7a6834680da name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.716479189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3798c4a1-47da-4a8d-abb6-f7a6834680da name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.717878425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=995b1f1e-447c-4f2a-acf9-b880afab4bb2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.718395175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232486718367395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=995b1f1e-447c-4f2a-acf9-b880afab4bb2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.719520089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac09d5dc-a670-4d30-8a3e-eb2e19e5af78 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.719692845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac09d5dc-a670-4d30-8a3e-eb2e19e5af78 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.720029668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac09d5dc-a670-4d30-8a3e-eb2e19e5af78 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.767513346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a3bde98-9a5f-4e46-94a6-0da90f8ed8fb name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.767646642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a3bde98-9a5f-4e46-94a6-0da90f8ed8fb name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.769415595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bac18732-45a1-4ef7-a1ac-9a6b0c1b053d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.769918476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232486769894304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bac18732-45a1-4ef7-a1ac-9a6b0c1b053d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.770986833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54392370-1dda-4a7e-90e8-f446c904f5ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.771356106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54392370-1dda-4a7e-90e8-f446c904f5ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.771679719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54392370-1dda-4a7e-90e8-f446c904f5ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.821170126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0727d701-af48-46b9-9e6c-2ae5190ba9ed name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.821279145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0727d701-af48-46b9-9e6c-2ae5190ba9ed name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.822695663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69d7e605-2666-4323-bd2f-adc7bd2e99b4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.823131808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232486823095639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69d7e605-2666-4323-bd2f-adc7bd2e99b4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.823658136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e9c7853-c34e-4bd5-aac8-f766067d3883 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.823711300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e9c7853-c34e-4bd5-aac8-f766067d3883 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:06 pause-848791 crio[2547]: time="2024-02-29 18:48:06.823956844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e9c7853-c34e-4bd5-aac8-f766067d3883 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cbe3e63c84bcd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 seconds ago       Running             coredns                   2                   d6910a43eebae       coredns-5dd5756b68-h88pr
	996c3c479f668       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   21 seconds ago      Running             kube-scheduler            2                   3bfd52160851e       kube-scheduler-pause-848791
	50ebd4f900686       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   21 seconds ago      Running             kube-apiserver            2                   5b83cb3b60fd4       kube-apiserver-pause-848791
	aa631b99c5f81       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   21 seconds ago      Running             kube-controller-manager   2                   33bacdf5d1018       kube-controller-manager-pause-848791
	6c3841f85eb52       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   21 seconds ago      Running             etcd                      2                   e539b028e8a9d       etcd-pause-848791
	52e60b520e4df       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   22 seconds ago      Running             kube-proxy                2                   1252249eb3f9c       kube-proxy-l2m9f
	75d80eeba4737       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   36 seconds ago      Exited              coredns                   1                   8ac8a124adbd1       coredns-5dd5756b68-h88pr
	0dc0e68d780b6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   36 seconds ago      Exited              kube-controller-manager   1                   ee2ecf3c9f0d1       kube-controller-manager-pause-848791
	0293df94c7faa       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   36 seconds ago      Exited              kube-apiserver            1                   e9c962de8f376       kube-apiserver-pause-848791
	1698055d49d79       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   36 seconds ago      Exited              kube-scheduler            1                   bf1c2ad33f3b9       kube-scheduler-pause-848791
	40864953bcc58       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   36 seconds ago      Exited              etcd                      1                   63df0d4c45c21       etcd-pause-848791
	fcaeddb617b38       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   36 seconds ago      Exited              kube-proxy                1                   179a79775293c       kube-proxy-l2m9f
	
	
	==> coredns [75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44809 - 31460 "HINFO IN 8728867127159481112.4838692107368353272. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013671482s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50176 - 62743 "HINFO IN 8729442428919199151.538908865851346355. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010574786s
	
	
	==> describe nodes <==
	Name:               pause-848791
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-848791
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=pause-848791
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T18_45_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:45:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-848791
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:47:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:46:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.95
	  Hostname:    pause-848791
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c3286c60203476bb89ba13e3695c75b
	  System UUID:                7c3286c6-0203-476b-b89b-a13e3695c75b
	  Boot ID:                    cb1b3dd6-faec-484b-ac50-a4318b0903ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-h88pr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     113s
	  kube-system                 etcd-pause-848791                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-pause-848791             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-pause-848791    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-l2m9f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-848791             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 109s  kube-proxy       
	  Normal  Starting                 18s   kube-proxy       
	  Normal  Starting                 2m8s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s  kubelet          Node pause-848791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s  kubelet          Node pause-848791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s  kubelet          Node pause-848791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m7s  kubelet          Node pause-848791 status is now: NodeReady
	  Normal  RegisteredNode           115s  node-controller  Node pause-848791 event: Registered Node pause-848791 in Controller
	  Normal  RegisteredNode           6s    node-controller  Node pause-848791 event: Registered Node pause-848791 in Controller
	
	
	==> dmesg <==
	[  +0.042953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.726910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.308707] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.698720] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.397140] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.067578] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060350] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.176233] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.150206] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.295067] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +9.839324] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060434] kauditd_printk_skb: 130 callbacks suppressed
	[  +7.725396] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.079721] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 18:46] kauditd_printk_skb: 21 callbacks suppressed
	[ +40.015519] kauditd_printk_skb: 39 callbacks suppressed
	[Feb29 18:47] systemd-fstab-generator[2001]: Ignoring "noauto" option for root device
	[  +0.242061] systemd-fstab-generator[2030]: Ignoring "noauto" option for root device
	[  +0.710405] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.393218] systemd-fstab-generator[2405]: Ignoring "noauto" option for root device
	[  +0.449988] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[ +13.534980] kauditd_printk_skb: 169 callbacks suppressed
	[ +15.262820] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be] <==
	{"level":"warn","ts":"2024-02-29T18:47:31.110405Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-02-29T18:47:31.110726Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.72.95:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.72.95:2380","--initial-cluster=pause-848791=https://192.168.72.95:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.72.95:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.72.95:2380","--name=pause-848791","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-02-29T18:47:31.113734Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-02-29T18:47:31.113833Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-02-29T18:47:31.113871Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.72.95:2380"]}
	{"level":"info","ts":"2024-02-29T18:47:31.113941Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:47:31.11466Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"]}
	{"level":"info","ts":"2024-02-29T18:47:31.116771Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-848791","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.72.95:2380"],"listen-peer-urls":["https://192.168.72.95:2380"],"advertise-client-urls":["https://192.168.72.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-
token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-02-29T18:47:31.127451Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"10.430729ms"}
	{"level":"info","ts":"2024-02-29T18:47:31.138864Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-02-29T18:47:31.187353Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","commit-index":427}
	{"level":"info","ts":"2024-02-29T18:47:31.191483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a switched to configuration voters=()"}
	{"level":"info","ts":"2024-02-29T18:47:31.192887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became follower at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:31.19328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 96f44c7526de935a [peers: [], term: 2, commit: 427, applied: 0, lastindex: 427, lastterm: 2]"}
	{"level":"warn","ts":"2024-02-29T18:47:31.20201Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-02-29T18:47:31.231815Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":397}
	{"level":"info","ts":"2024-02-29T18:47:31.24852Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-02-29T18:47:31.254523Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"96f44c7526de935a","timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:31.255247Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"96f44c7526de935a"}
	{"level":"info","ts":"2024-02-29T18:47:31.255501Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"96f44c7526de935a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-02-29T18:47:31.259613Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	
	
	==> etcd [6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894] <==
	{"level":"info","ts":"2024-02-29T18:47:45.568277Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.95:2380"}
	{"level":"info","ts":"2024-02-29T18:47:45.568812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"96f44c7526de935a","initial-advertise-peer-urls":["https://192.168.72.95:2380"],"listen-peer-urls":["https://192.168.72.95:2380"],"advertise-client-urls":["https://192.168.72.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:47:45.569094Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:47:45.565765Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T18:47:45.565928Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.575924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.575943Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.566211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a switched to configuration voters=(10877403066053595994)"}
	{"level":"info","ts":"2024-02-29T18:47:45.576078Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","added-peer-id":"96f44c7526de935a","added-peer-peer-urls":["https://192.168.72.95:2380"]}
	{"level":"info","ts":"2024-02-29T18:47:45.576184Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:45.576215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:47.036278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a received MsgPreVoteResp from 96f44c7526de935a at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a received MsgVoteResp from 96f44c7526de935a at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 96f44c7526de935a elected leader 96f44c7526de935a at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.038718Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"96f44c7526de935a","local-member-attributes":"{Name:pause-848791 ClientURLs:[https://192.168.72.95:2379]}","request-path":"/0/members/96f44c7526de935a/attributes","cluster-id":"52f62c7071e5a955","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:47.038788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:47.039148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:47.039227Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:47.038743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:47.04124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.95:2379"}
	{"level":"info","ts":"2024-02-29T18:47:47.041827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:48:07 up 2 min,  0 users,  load average: 2.82, 1.02, 0.37
	Linux pause-848791 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d] <==
	I0229 18:47:30.956442       1 options.go:220] external host was not specified, using 192.168.72.95
	I0229 18:47:30.957827       1 server.go:148] Version: v1.28.4
	I0229 18:47:30.957870       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e] <==
	I0229 18:47:48.684511       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 18:47:48.684670       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0229 18:47:48.684777       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0229 18:47:48.684873       1 available_controller.go:423] Starting AvailableConditionController
	I0229 18:47:48.684925       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0229 18:47:48.684964       1 controller.go:78] Starting OpenAPI AggregationController
	I0229 18:47:48.685103       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0229 18:47:48.685404       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0229 18:47:48.774738       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:47:48.777150       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:47:48.780256       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 18:47:48.780362       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 18:47:48.780548       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:47:48.780738       1 aggregator.go:166] initial CRD sync complete...
	I0229 18:47:48.780754       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:47:48.780759       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:47:48.780764       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:47:48.784884       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:47:48.790749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:47:48.795688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0229 18:47:48.807805       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 18:47:48.849064       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:47:49.681857       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 18:48:01.822202       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:48:01.875401       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d] <==
	
	
	==> kube-controller-manager [aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58] <==
	I0229 18:48:01.847657       1 shared_informer.go:318] Caches are synced for service account
	I0229 18:48:01.850729       1 shared_informer.go:318] Caches are synced for deployment
	I0229 18:48:01.853662       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 18:48:01.853690       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0229 18:48:01.857050       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 18:48:01.859728       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0229 18:48:01.860525       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 18:48:01.860855       1 shared_informer.go:318] Caches are synced for TTL
	I0229 18:48:01.860933       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 18:48:01.861031       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 18:48:01.862929       1 shared_informer.go:318] Caches are synced for job
	I0229 18:48:01.867372       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0229 18:48:01.894951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.414569ms"
	I0229 18:48:01.897888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.426µs"
	I0229 18:48:01.933353       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 18:48:01.936978       1 shared_informer.go:318] Caches are synced for disruption
	I0229 18:48:01.980739       1 shared_informer.go:318] Caches are synced for HPA
	I0229 18:48:02.042546       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:48:02.042767       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:48:02.398778       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:48:02.409525       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:48:02.409739       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 18:48:03.085531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="151.675µs"
	I0229 18:48:03.110795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.788027ms"
	I0229 18:48:03.110896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.77µs"
	
	
	==> kube-proxy [52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9] <==
	I0229 18:47:46.430435       1 server_others.go:69] "Using iptables proxy"
	I0229 18:47:48.810081       1 node.go:141] Successfully retrieved node IP: 192.168.72.95
	I0229 18:47:48.901243       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:47:48.901317       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:47:48.904298       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:47:48.904409       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:47:48.904811       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:47:48.904857       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:48.905948       1 config.go:188] "Starting service config controller"
	I0229 18:47:48.906024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:47:48.906062       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:47:48.906105       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:47:48.906785       1 config.go:315] "Starting node config controller"
	I0229 18:47:48.906820       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:47:49.007150       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:47:49.007216       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:47:49.007248       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf] <==
	
	
	==> kube-scheduler [1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969] <==
	
	
	==> kube-scheduler [996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf] <==
	I0229 18:47:46.367536       1 serving.go:348] Generated self-signed cert in-memory
	W0229 18:47:48.759288       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:47:48.759483       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:47:48.759814       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:47:48.759944       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:47:48.809299       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 18:47:48.810968       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:48.813264       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:47:48.813676       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:47:48.813856       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:47:48.814125       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:47:48.914676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.846359    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.849845    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: E0229 18:47:45.850772    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.850918    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.859214    1225 status_manager.go:853] "Failed to get status for pod" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd" pod="kube-system/coredns-5dd5756b68-h88pr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h88pr\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.859888    1225 status_manager.go:853] "Failed to get status for pod" podUID="6eb35ebc21e5130b09eb73823cab2d15" pod="kube-system/kube-controller-manager-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.879360    1225 status_manager.go:853] "Failed to get status for pod" podUID="e52a40e76baf97a307c38f1a6ffe05c5" pod="kube-system/kube-scheduler-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.883067    1225 status_manager.go:853] "Failed to get status for pod" podUID="048336b3943725df307a6a6dcf28ff99" pod="kube-system/etcd-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.883834    1225 status_manager.go:853] "Failed to get status for pod" podUID="79d3ca487c0ab7d16b95c0911752c3c9" pod="kube-system/kube-apiserver-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.884751    1225 status_manager.go:853] "Failed to get status for pod" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd" pod="kube-system/coredns-5dd5756b68-h88pr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h88pr\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.885316    1225 status_manager.go:853] "Failed to get status for pod" podUID="6eb35ebc21e5130b09eb73823cab2d15" pod="kube-system/kube-controller-manager-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.886396    1225 status_manager.go:853] "Failed to get status for pod" podUID="e52a40e76baf97a307c38f1a6ffe05c5" pod="kube-system/kube-scheduler-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.887028    1225 status_manager.go:853] "Failed to get status for pod" podUID="048336b3943725df307a6a6dcf28ff99" pod="kube-system/etcd-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.887710    1225 status_manager.go:853] "Failed to get status for pod" podUID="79d3ca487c0ab7d16b95c0911752c3c9" pod="kube-system/kube-apiserver-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.893844    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:46 pause-848791 kubelet[1225]: I0229 18:47:46.888287    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:46 pause-848791 kubelet[1225]: E0229 18:47:46.889412    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:47:47 pause-848791 kubelet[1225]: I0229 18:47:47.887301    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:47 pause-848791 kubelet[1225]: E0229 18:47:47.887665    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:48:00 pause-848791 kubelet[1225]: E0229 18:48:00.039868    1225 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:48:00 pause-848791 kubelet[1225]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:48:01 pause-848791 kubelet[1225]: I0229 18:48:01.926882    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-848791 -n pause-848791
helpers_test.go:261: (dbg) Run:  kubectl --context pause-848791 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-848791 -n pause-848791
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-848791 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-848791 logs -n 25: (1.455547794s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo cat              | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo                  | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo find             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-587185 sudo crio             | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-587185                       | cilium-587185             | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| start   | -p pause-848791 --memory=2048          | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:46 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-588905            | force-systemd-env-588905  | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-expiration-393248              | cert-expiration-393248    | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-297898 ssh cat      | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-297898           | force-systemd-flag-297898 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-009676 ssh                | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-009676 -- sudo         | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-009676                 | cert-options-009676       | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	| start   | -p old-k8s-version-631080              | old-k8s-version-631080    | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	| start   | -p pause-848791                        | pause-848791              | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:48 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086           | kubernetes-upgrade-541086 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                   | no-preload-247197         | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC |                     |
	|         | --memory=2200 --alsologtostderr        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false            |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:47:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:47:48.415405   45067 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:47:48.415545   45067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:48.415557   45067 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:48.415563   45067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:48.415833   45067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:47:48.417027   45067 out.go:298] Setting JSON to false
	I0229 18:47:48.418894   45067 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5413,"bootTime":1709227056,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:47:48.418998   45067 start.go:139] virtualization: kvm guest
	I0229 18:47:48.421018   45067 out.go:177] * [no-preload-247197] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:47:48.423113   45067 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:47:48.423065   45067 notify.go:220] Checking for updates...
	I0229 18:47:48.424511   45067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:47:48.426360   45067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:47:48.427744   45067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:48.429066   45067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:47:48.430617   45067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:47:48.432588   45067 config.go:182] Loaded profile config "cert-expiration-393248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:48.432780   45067 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:47:48.432958   45067 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:48.433075   45067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:47:48.474750   45067 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:47:48.475999   45067 start.go:299] selected driver: kvm2
	I0229 18:47:48.476013   45067 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:47:48.476025   45067 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:47:48.476762   45067 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.476833   45067 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:47:48.492846   45067 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:47:48.492895   45067 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:47:48.493170   45067 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:47:48.493265   45067 cni.go:84] Creating CNI manager for ""
	I0229 18:47:48.493279   45067 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:47:48.493299   45067 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:47:48.493315   45067 start_flags.go:323] config:
	{Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:48.493481   45067 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.495575   45067 out.go:177] * Starting control plane node no-preload-247197 in cluster no-preload-247197
	I0229 18:47:48.496916   45067 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:47:48.497064   45067 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:47:48.497104   45067 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json: {Name:mk7bd922f98febc92ac069a402760ec071d4e822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:48.497213   45067 cache.go:107] acquiring lock: {Name:mk06b7fdf249210ec62788ccdafc872bcfcea452 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497221   45067 cache.go:107] acquiring lock: {Name:mkae6606a1bf5cc34f8177d5b5bbc79dd658ace6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497235   45067 cache.go:107] acquiring lock: {Name:mk60e308c69e43210797f13239849b555a97cc76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497254   45067 cache.go:107] acquiring lock: {Name:mka04c760f627d3cb8a149022a1a807e9c41eca5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497287   45067 cache.go:107] acquiring lock: {Name:mkbed3667a1fa6e9621d28444017016a6fd1a369 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497305   45067 cache.go:115] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0229 18:47:48.497309   45067 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:47:48.497319   45067 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.057µs
	I0229 18:47:48.497336   45067 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0229 18:47:48.497336   45067 cache.go:107] acquiring lock: {Name:mk19d8daa969d7d0f0327e27d7a7e329c82532be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497352   45067 cache.go:107] acquiring lock: {Name:mk5c2bbd01fb2a58b3fa81ca9ef4e086ccb53efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497387   45067 start.go:369] acquired machines lock for "no-preload-247197" in 65.873µs
	I0229 18:47:48.497389   45067 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:47:48.497384   45067 cache.go:107] acquiring lock: {Name:mk1bac3238e53014886fa144bcfd676359aa3d56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:48.497423   45067 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:47:48.497461   45067 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:47:48.497460   45067 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:47:48.497483   45067 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:47:48.497414   45067 start.go:93] Provisioning new machine with config: &{Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:47:48.497562   45067 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:47:48.497591   45067 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:47:48.713534   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:47:48.713573   44536 retry.go:31] will retry after 243.225797ms: https://192.168.72.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:47:48.957051   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:48.965247   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:48.965288   44536 retry.go:31] will retry after 373.516106ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.339549   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:49.344358   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.344400   44536 retry.go:31] will retry after 482.004376ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.827047   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:49.836510   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:49.836607   44536 retry.go:31] will retry after 502.036042ms: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:47:50.338789   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:47:50.348693   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0229 18:47:50.369721   44536 system_pods.go:86] 6 kube-system pods found
	I0229 18:47:50.369754   44536 system_pods.go:89] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:47:50.369762   44536 system_pods.go:89] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:47:50.369774   44536 system_pods.go:89] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:47:50.369785   44536 system_pods.go:89] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:47:50.369789   44536 system_pods.go:89] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:47:50.369795   44536 system_pods.go:89] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:47:50.370939   44536 api_server.go:141] control plane version: v1.28.4
	I0229 18:47:50.370963   44536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.95
	I0229 18:47:50.370969   44536 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 18:47:50.370974   44536 kubeadm.go:640] restartCluster took 7.456363527s
	I0229 18:47:50.370980   44536 kubeadm.go:406] StartCluster complete in 7.577662782s
	I0229 18:47:50.370993   44536 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:50.371080   44536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:47:50.372031   44536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:50.372243   44536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:47:50.372370   44536 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:47:50.375060   44536 out.go:177] * Enabled addons: 
	I0229 18:47:50.372523   44536 config.go:182] Loaded profile config "pause-848791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:47:50.373102   44536 kapi.go:59] client config for pause-848791: &rest.Config{Host:"https://192.168.72.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/profiles/pause-848791/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:47:50.376410   44536 addons.go:505] enable addons completed in 4.044261ms: enabled=[]
	I0229 18:47:50.379950   44536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-848791" context rescaled to 1 replicas
	I0229 18:47:50.380018   44536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 18:47:50.381896   44536 out.go:177] * Verifying Kubernetes components...
	I0229 18:47:50.383348   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:47:50.502672   44536 node_ready.go:35] waiting up to 6m0s for node "pause-848791" to be "Ready" ...
	I0229 18:47:50.502936   44536 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:47:50.506512   44536 node_ready.go:49] node "pause-848791" has status "Ready":"True"
	I0229 18:47:50.506534   44536 node_ready.go:38] duration metric: took 3.833105ms waiting for node "pause-848791" to be "Ready" ...
	I0229 18:47:50.506545   44536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:47:50.511840   44536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace to be "Ready" ...
	I0229 18:47:52.520802   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:48.499985   45067 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:47:48.497716   45067 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:47:48.499149   45067 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:47:48.500165   45067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:47:48.500197   45067 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:47:48.499163   45067 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:47:48.499182   45067 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:47:48.499176   45067 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:47:48.499189   45067 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:47:48.499305   45067 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:47:48.500902   45067 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:47:48.516908   45067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0229 18:47:48.517362   45067 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:47:48.517995   45067 main.go:141] libmachine: Using API Version  1
	I0229 18:47:48.518025   45067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:47:48.518329   45067 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:47:48.518563   45067 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:47:48.518732   45067 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:47:48.518939   45067 start.go:159] libmachine.API.Create for "no-preload-247197" (driver="kvm2")
	I0229 18:47:48.519010   45067 client.go:168] LocalClient.Create starting
	I0229 18:47:48.519059   45067 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 18:47:48.519110   45067 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:48.519135   45067 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:48.519200   45067 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 18:47:48.519225   45067 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:48.519237   45067 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:48.519262   45067 main.go:141] libmachine: Running pre-create checks...
	I0229 18:47:48.519271   45067 main.go:141] libmachine: (no-preload-247197) Calling .PreCreateCheck
	I0229 18:47:48.519684   45067 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:47:48.520111   45067 main.go:141] libmachine: Creating machine...
	I0229 18:47:48.520127   45067 main.go:141] libmachine: (no-preload-247197) Calling .Create
	I0229 18:47:48.520273   45067 main.go:141] libmachine: (no-preload-247197) Creating KVM machine...
	I0229 18:47:48.521861   45067 main.go:141] libmachine: (no-preload-247197) DBG | found existing default KVM network
	I0229 18:47:48.523492   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.523322   45089 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:45:ad} reservation:<nil>}
	I0229 18:47:48.524942   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.524848   45089 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002889b0}
	I0229 18:47:48.531191   45067 main.go:141] libmachine: (no-preload-247197) DBG | trying to create private KVM network mk-no-preload-247197 192.168.50.0/24...
	I0229 18:47:48.613639   45067 main.go:141] libmachine: (no-preload-247197) DBG | private KVM network mk-no-preload-247197 192.168.50.0/24 created
	I0229 18:47:48.613683   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.613620   45089 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:48.613702   45067 main.go:141] libmachine: (no-preload-247197) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 ...
	I0229 18:47:48.613713   45067 main.go:141] libmachine: (no-preload-247197) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:47:48.613787   45067 main.go:141] libmachine: (no-preload-247197) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:47:48.642384   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0229 18:47:48.645997   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:47:48.649979   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:47:48.655461   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:47:48.657361   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:47:48.680883   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:47:48.711682   45067 cache.go:162] opening:  /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:47:48.717916   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0229 18:47:48.717943   45067 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 220.609833ms
	I0229 18:47:48.717959   45067 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0229 18:47:48.846558   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:48.846469   45089 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa...
	I0229 18:47:49.228895   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0229 18:47:49.228924   45067 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 731.71791ms
	I0229 18:47:49.228942   45067 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0229 18:47:49.264564   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:49.264450   45089 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/no-preload-247197.rawdisk...
	I0229 18:47:49.264591   45067 main.go:141] libmachine: (no-preload-247197) DBG | Writing magic tar header
	I0229 18:47:49.264611   45067 main.go:141] libmachine: (no-preload-247197) DBG | Writing SSH key tar header
	I0229 18:47:49.264629   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:49.264594   45089 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 ...
	I0229 18:47:49.264722   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197
	I0229 18:47:49.264756   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197 (perms=drwx------)
	I0229 18:47:49.264768   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 18:47:49.264787   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:47:49.264803   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 18:47:49.264812   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 18:47:49.264819   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:47:49.264840   45067 main.go:141] libmachine: (no-preload-247197) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:47:49.264855   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:47:49.264866   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 18:47:49.264880   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:47:49.264893   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:47:49.264905   45067 main.go:141] libmachine: (no-preload-247197) DBG | Checking permissions on dir: /home
	I0229 18:47:49.264913   45067 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:47:49.264930   45067 main.go:141] libmachine: (no-preload-247197) DBG | Skipping /home - not owner
	I0229 18:47:49.266194   45067 main.go:141] libmachine: (no-preload-247197) define libvirt domain using xml: 
	I0229 18:47:49.266226   45067 main.go:141] libmachine: (no-preload-247197) <domain type='kvm'>
	I0229 18:47:49.266233   45067 main.go:141] libmachine: (no-preload-247197)   <name>no-preload-247197</name>
	I0229 18:47:49.266245   45067 main.go:141] libmachine: (no-preload-247197)   <memory unit='MiB'>2200</memory>
	I0229 18:47:49.266280   45067 main.go:141] libmachine: (no-preload-247197)   <vcpu>2</vcpu>
	I0229 18:47:49.266308   45067 main.go:141] libmachine: (no-preload-247197)   <features>
	I0229 18:47:49.266334   45067 main.go:141] libmachine: (no-preload-247197)     <acpi/>
	I0229 18:47:49.266360   45067 main.go:141] libmachine: (no-preload-247197)     <apic/>
	I0229 18:47:49.266367   45067 main.go:141] libmachine: (no-preload-247197)     <pae/>
	I0229 18:47:49.266377   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.266392   45067 main.go:141] libmachine: (no-preload-247197)   </features>
	I0229 18:47:49.266401   45067 main.go:141] libmachine: (no-preload-247197)   <cpu mode='host-passthrough'>
	I0229 18:47:49.266412   45067 main.go:141] libmachine: (no-preload-247197)   
	I0229 18:47:49.266421   45067 main.go:141] libmachine: (no-preload-247197)   </cpu>
	I0229 18:47:49.266432   45067 main.go:141] libmachine: (no-preload-247197)   <os>
	I0229 18:47:49.266443   45067 main.go:141] libmachine: (no-preload-247197)     <type>hvm</type>
	I0229 18:47:49.266451   45067 main.go:141] libmachine: (no-preload-247197)     <boot dev='cdrom'/>
	I0229 18:47:49.266466   45067 main.go:141] libmachine: (no-preload-247197)     <boot dev='hd'/>
	I0229 18:47:49.266487   45067 main.go:141] libmachine: (no-preload-247197)     <bootmenu enable='no'/>
	I0229 18:47:49.266503   45067 main.go:141] libmachine: (no-preload-247197)   </os>
	I0229 18:47:49.266515   45067 main.go:141] libmachine: (no-preload-247197)   <devices>
	I0229 18:47:49.266534   45067 main.go:141] libmachine: (no-preload-247197)     <disk type='file' device='cdrom'>
	I0229 18:47:49.266552   45067 main.go:141] libmachine: (no-preload-247197)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/boot2docker.iso'/>
	I0229 18:47:49.266568   45067 main.go:141] libmachine: (no-preload-247197)       <target dev='hdc' bus='scsi'/>
	I0229 18:47:49.266579   45067 main.go:141] libmachine: (no-preload-247197)       <readonly/>
	I0229 18:47:49.266594   45067 main.go:141] libmachine: (no-preload-247197)     </disk>
	I0229 18:47:49.266608   45067 main.go:141] libmachine: (no-preload-247197)     <disk type='file' device='disk'>
	I0229 18:47:49.266619   45067 main.go:141] libmachine: (no-preload-247197)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:47:49.266636   45067 main.go:141] libmachine: (no-preload-247197)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/no-preload-247197.rawdisk'/>
	I0229 18:47:49.266648   45067 main.go:141] libmachine: (no-preload-247197)       <target dev='hda' bus='virtio'/>
	I0229 18:47:49.266659   45067 main.go:141] libmachine: (no-preload-247197)     </disk>
	I0229 18:47:49.266677   45067 main.go:141] libmachine: (no-preload-247197)     <interface type='network'>
	I0229 18:47:49.266693   45067 main.go:141] libmachine: (no-preload-247197)       <source network='mk-no-preload-247197'/>
	I0229 18:47:49.266706   45067 main.go:141] libmachine: (no-preload-247197)       <model type='virtio'/>
	I0229 18:47:49.266716   45067 main.go:141] libmachine: (no-preload-247197)     </interface>
	I0229 18:47:49.266728   45067 main.go:141] libmachine: (no-preload-247197)     <interface type='network'>
	I0229 18:47:49.266748   45067 main.go:141] libmachine: (no-preload-247197)       <source network='default'/>
	I0229 18:47:49.266759   45067 main.go:141] libmachine: (no-preload-247197)       <model type='virtio'/>
	I0229 18:47:49.266773   45067 main.go:141] libmachine: (no-preload-247197)     </interface>
	I0229 18:47:49.266798   45067 main.go:141] libmachine: (no-preload-247197)     <serial type='pty'>
	I0229 18:47:49.266815   45067 main.go:141] libmachine: (no-preload-247197)       <target port='0'/>
	I0229 18:47:49.266847   45067 main.go:141] libmachine: (no-preload-247197)     </serial>
	I0229 18:47:49.266865   45067 main.go:141] libmachine: (no-preload-247197)     <console type='pty'>
	I0229 18:47:49.266878   45067 main.go:141] libmachine: (no-preload-247197)       <target type='serial' port='0'/>
	I0229 18:47:49.266888   45067 main.go:141] libmachine: (no-preload-247197)     </console>
	I0229 18:47:49.266901   45067 main.go:141] libmachine: (no-preload-247197)     <rng model='virtio'>
	I0229 18:47:49.266919   45067 main.go:141] libmachine: (no-preload-247197)       <backend model='random'>/dev/random</backend>
	I0229 18:47:49.266943   45067 main.go:141] libmachine: (no-preload-247197)     </rng>
	I0229 18:47:49.266958   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.266974   45067 main.go:141] libmachine: (no-preload-247197)     
	I0229 18:47:49.267000   45067 main.go:141] libmachine: (no-preload-247197)   </devices>
	I0229 18:47:49.267012   45067 main.go:141] libmachine: (no-preload-247197) </domain>
	I0229 18:47:49.267028   45067 main.go:141] libmachine: (no-preload-247197) 
	I0229 18:47:49.271064   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:3e:dc:25 in network default
	I0229 18:47:49.271656   45067 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:47:49.271681   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:49.272432   45067 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:47:49.272762   45067 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:47:49.273295   45067 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:47:49.274016   45067 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:47:49.827306   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0229 18:47:49.827332   45067 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 1.329948794s
	I0229 18:47:49.827346   45067 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0229 18:47:49.908648   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0229 18:47:49.908677   45067 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 1.411326981s
	I0229 18:47:49.908692   45067 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0229 18:47:49.970733   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0229 18:47:49.970756   45067 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 1.473543172s
	I0229 18:47:49.970767   45067 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0229 18:47:50.081914   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0229 18:47:50.081950   45067 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.58466357s
	I0229 18:47:50.081974   45067 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0229 18:47:50.282837   45067 cache.go:157] /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0229 18:47:50.282878   45067 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 1.785652057s
	I0229 18:47:50.282889   45067 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0229 18:47:50.282907   45067 cache.go:87] Successfully saved all images to host disk.
	I0229 18:47:50.683510   45067 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:47:50.684398   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:50.684932   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:50.684958   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:50.684890   45089 retry.go:31] will retry after 298.188719ms: waiting for machine to come up
	I0229 18:47:50.985259   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:50.985811   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:50.985859   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:50.985749   45089 retry.go:31] will retry after 383.646786ms: waiting for machine to come up
	I0229 18:47:51.371381   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:51.371809   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:51.371851   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:51.371769   45089 retry.go:31] will retry after 327.67165ms: waiting for machine to come up
	I0229 18:47:51.701181   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:51.701723   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:51.701771   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:51.701665   45089 retry.go:31] will retry after 532.283305ms: waiting for machine to come up
	I0229 18:47:52.235358   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:52.235744   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:52.235781   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:52.235696   45089 retry.go:31] will retry after 497.785715ms: waiting for machine to come up
	I0229 18:47:52.735505   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:52.735979   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:52.736006   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:52.735951   45089 retry.go:31] will retry after 636.91864ms: waiting for machine to come up
	I0229 18:47:53.374979   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:53.375506   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:53.375534   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:53.375463   45089 retry.go:31] will retry after 875.964934ms: waiting for machine to come up
	I0229 18:47:55.021818   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:57.519355   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:54.252800   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:54.253319   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:54.253344   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:54.253282   45089 retry.go:31] will retry after 1.430919856s: waiting for machine to come up
	I0229 18:47:55.685937   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:55.686401   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:55.686446   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:55.686358   45089 retry.go:31] will retry after 1.218031611s: waiting for machine to come up
	I0229 18:47:56.905950   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:56.906693   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:56.906720   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:56.906634   45089 retry.go:31] will retry after 1.803107669s: waiting for machine to come up
	I0229 18:47:59.520240   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:01.520585   44536 pod_ready.go:102] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:58.711609   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:47:58.712108   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:47:58.712142   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:47:58.712079   45089 retry.go:31] will retry after 2.104573546s: waiting for machine to come up
	I0229 18:48:00.818388   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:48:00.818882   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:48:00.818906   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:48:00.818833   45089 retry.go:31] will retry after 2.372598202s: waiting for machine to come up
	I0229 18:48:03.194205   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:48:03.194608   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:48:03.194638   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:48:03.194555   45089 retry.go:31] will retry after 3.060606768s: waiting for machine to come up
	I0229 18:48:03.520673   44536 pod_ready.go:92] pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:03.520696   44536 pod_ready.go:81] duration metric: took 13.008837482s waiting for pod "coredns-5dd5756b68-h88pr" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.520707   44536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.526448   44536 pod_ready.go:92] pod "etcd-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:03.526470   44536 pod_ready.go:81] duration metric: took 5.756933ms waiting for pod "etcd-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:03.526479   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.038466   44536 pod_ready.go:92] pod "kube-apiserver-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.038490   44536 pod_ready.go:81] duration metric: took 1.512003862s waiting for pod "kube-apiserver-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.038499   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.045588   44536 pod_ready.go:92] pod "kube-controller-manager-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.045615   44536 pod_ready.go:81] duration metric: took 7.108652ms waiting for pod "kube-controller-manager-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.045628   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2m9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.056050   44536 pod_ready.go:92] pod "kube-proxy-l2m9f" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.056072   44536 pod_ready.go:81] duration metric: took 10.437029ms waiting for pod "kube-proxy-l2m9f" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.056081   44536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.118583   44536 pod_ready.go:92] pod "kube-scheduler-pause-848791" in "kube-system" namespace has status "Ready":"True"
	I0229 18:48:05.118605   44536 pod_ready.go:81] duration metric: took 62.517974ms waiting for pod "kube-scheduler-pause-848791" in "kube-system" namespace to be "Ready" ...
	I0229 18:48:05.118613   44536 pod_ready.go:38] duration metric: took 14.612056214s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:48:05.118628   44536 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:48:05.118673   44536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:48:05.146323   44536 api_server.go:72] duration metric: took 14.766251771s to wait for apiserver process to appear ...
	I0229 18:48:05.146354   44536 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:48:05.146375   44536 api_server.go:253] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0229 18:48:05.154472   44536 api_server.go:279] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0229 18:48:05.158525   44536 api_server.go:141] control plane version: v1.28.4
	I0229 18:48:05.158551   44536 api_server.go:131] duration metric: took 12.188675ms to wait for apiserver health ...
	I0229 18:48:05.158562   44536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:48:05.320621   44536 system_pods.go:59] 6 kube-system pods found
	I0229 18:48:05.320648   44536 system_pods.go:61] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running
	I0229 18:48:05.320654   44536 system_pods.go:61] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running
	I0229 18:48:05.320659   44536 system_pods.go:61] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running
	I0229 18:48:05.320664   44536 system_pods.go:61] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running
	I0229 18:48:05.320677   44536 system_pods.go:61] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:48:05.320682   44536 system_pods.go:61] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running
	I0229 18:48:05.320688   44536 system_pods.go:74] duration metric: took 162.119764ms to wait for pod list to return data ...
	I0229 18:48:05.320698   44536 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:48:05.516725   44536 default_sa.go:45] found service account: "default"
	I0229 18:48:05.516750   44536 default_sa.go:55] duration metric: took 196.045292ms for default service account to be created ...
	I0229 18:48:05.516760   44536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:48:05.719711   44536 system_pods.go:86] 6 kube-system pods found
	I0229 18:48:05.719742   44536 system_pods.go:89] "coredns-5dd5756b68-h88pr" [dd96b56f-afb7-4472-b92a-2026983e58bd] Running
	I0229 18:48:05.719748   44536 system_pods.go:89] "etcd-pause-848791" [349e8342-f5e1-45b3-b817-238b70f5c18f] Running
	I0229 18:48:05.719753   44536 system_pods.go:89] "kube-apiserver-pause-848791" [59f63a26-06ce-41f9-9773-a312615cd421] Running
	I0229 18:48:05.719760   44536 system_pods.go:89] "kube-controller-manager-pause-848791" [f1fc6c4e-a496-4094-b529-c1a0b010ad1d] Running
	I0229 18:48:05.719766   44536 system_pods.go:89] "kube-proxy-l2m9f" [41adf7f1-0c82-4136-a271-819137db321b] Running
	I0229 18:48:05.719772   44536 system_pods.go:89] "kube-scheduler-pause-848791" [e4d9b180-cf2d-4347-be1a-93909a8988e3] Running
	I0229 18:48:05.719781   44536 system_pods.go:126] duration metric: took 203.014596ms to wait for k8s-apps to be running ...
	I0229 18:48:05.719791   44536 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:48:05.719841   44536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:48:05.737449   44536 system_svc.go:56] duration metric: took 17.647432ms WaitForService to wait for kubelet.
	I0229 18:48:05.737482   44536 kubeadm.go:581] duration metric: took 15.357414146s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:48:05.737502   44536 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:48:05.918606   44536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:48:05.918640   44536 node_conditions.go:123] node cpu capacity is 2
	I0229 18:48:05.918654   44536 node_conditions.go:105] duration metric: took 181.146465ms to run NodePressure ...
	I0229 18:48:05.918668   44536 start.go:228] waiting for startup goroutines ...
	I0229 18:48:05.918698   44536 start.go:233] waiting for cluster config update ...
	I0229 18:48:05.918712   44536 start.go:242] writing updated cluster config ...
	I0229 18:48:05.919051   44536 ssh_runner.go:195] Run: rm -f paused
	I0229 18:48:05.973608   44536 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:48:05.975815   44536 out.go:177] * Done! kubectl is now configured to use "pause-848791" cluster and "default" namespace by default
	I0229 18:48:06.257031   45067 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:48:06.257484   45067 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:48:06.257512   45067 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:48:06.257433   45089 retry.go:31] will retry after 5.297916204s: waiting for machine to come up
	
	
	==> CRI-O <==
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.764635202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232488764606242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d458021-a20f-4225-9bd1-f485ab4af648 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.765106954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64aacd53-ce4c-425c-98cc-243c4238c822 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.765185432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64aacd53-ce4c-425c-98cc-243c4238c822 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.765771627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64aacd53-ce4c-425c-98cc-243c4238c822 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.812231353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f227abb-1007-4f49-aa57-0339a0efd4e7 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.812307647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f227abb-1007-4f49-aa57-0339a0efd4e7 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.813531754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff3da765-b872-4d36-b1eb-b817c9c3e2a6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.814038216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232488814015250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff3da765-b872-4d36-b1eb-b817c9c3e2a6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.814476342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=997185ab-8ff3-44a0-9e34-59964056cc53 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.814767382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=997185ab-8ff3-44a0-9e34-59964056cc53 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.815086209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=997185ab-8ff3-44a0-9e34-59964056cc53 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.863765744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0759a9a-b1a6-4292-a23c-27977fae1454 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.863869828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0759a9a-b1a6-4292-a23c-27977fae1454 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.865926819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2eda4ce7-93aa-461d-8b4f-a3ebe4c8232c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.866417012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232488866378107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eda4ce7-93aa-461d-8b4f-a3ebe4c8232c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.867535102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6323ef53-71ba-4c5c-a2bd-074e8045419c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.867809262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6323ef53-71ba-4c5c-a2bd-074e8045419c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.868086041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6323ef53-71ba-4c5c-a2bd-074e8045419c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.927288080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8832b46c-44a2-411d-924e-b6b64fd928a5 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.927359460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8832b46c-44a2-411d-924e-b6b64fd928a5 name=/runtime.v1.RuntimeService/Version
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.930909348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fff5531b-e625-4c29-b4e0-3260c0e3dba7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.931337394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709232488931313252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff5531b-e625-4c29-b4e0-3260c0e3dba7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.931965439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0adba4f-16af-467e-9532-ebe66ee2fb97 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.932051645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0adba4f-16af-467e-9532-ebe66ee2fb97 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 18:48:08 pause-848791 crio[2547]: time="2024-02-29 18:48:08.932273527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65,PodSandboxId:d6910a43eebae742920cb0842f6ac6629db9afc827a238fd5298d74ed20edabb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709232481947834026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccff3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58,PodSandboxId:33bacdf5d10186ed0411cf986be347f987c657b3a743a387f5cc09e8687a1f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709232465170075379,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf,PodSandboxId:3bfd52160851e38bc759a337ad977effdce3e397e10accac90b18599be07e815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709232465257748661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e,PodSandboxId:5b83cb3b60fd48edf78ae8b5c93f79f97a148c93e342a3654d1922137cd02fc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709232465211642496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9,PodSandboxId:1252249eb3f9cb4129ef4c27e2fb358b775778cab89745243ee62925c26c3a46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709232464897420495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41adf7f1-0c82-4136-a271
-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894,PodSandboxId:e539b028e8a9d176940ad917461de7993d8c086422b6778b52c21f0e640c816e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709232465160034681,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io
.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc,PodSandboxId:8ac8a124adbd11fdd69633e3f4a64dfbb884729ec3d083426ea37ea367c2d0d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709232450816429952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h88pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd96b56f-afb7-4472-b92a-2026983e58bd,},Annotations:map[string]string{io.kubernetes.container.hash: dccf
f3c2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d,PodSandboxId:ee2ecf3c9f0d1e4c06552ff7a5c0e154d561deff4e2f90956b073d4f234810a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709232450264796874,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eb35ebc21e5130b09eb73823cab2d15,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be,PodSandboxId:63df0d4c45c21cc7cd3952105dc85270df81d19a2a6116e9d5c3a43b3c41d9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709232450073822218,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-848791,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048336b3943725df307a6a6dcf28ff99,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7688fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d,PodSandboxId:e9c962de8f376e49e0d838257e1233645c3d925aee2f7c4c14110b247eedddce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709232450177858530,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-8487
91,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79d3ca487c0ab7d16b95c0911752c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 35482996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969,PodSandboxId:bf1c2ad33f3b913ad156869f92db2b8b8ced421cf284518812588d3607e2f625,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709232450104417653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-848791,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e52a40e76baf97a307c38f1a6ffe05c5,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf,PodSandboxId:179a79775293c5b3bbc399bcc74306563e986a623e4707b8ede21bc21efa9973,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709232450029518679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2m9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 41adf7f1-0c82-4136-a271-819137db321b,},Annotations:map[string]string{io.kubernetes.container.hash: 971943ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0adba4f-16af-467e-9532-ebe66ee2fb97 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cbe3e63c84bcd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   7 seconds ago       Running             coredns                   2                   d6910a43eebae       coredns-5dd5756b68-h88pr
	996c3c479f668       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   23 seconds ago      Running             kube-scheduler            2                   3bfd52160851e       kube-scheduler-pause-848791
	50ebd4f900686       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   23 seconds ago      Running             kube-apiserver            2                   5b83cb3b60fd4       kube-apiserver-pause-848791
	aa631b99c5f81       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   23 seconds ago      Running             kube-controller-manager   2                   33bacdf5d1018       kube-controller-manager-pause-848791
	6c3841f85eb52       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   e539b028e8a9d       etcd-pause-848791
	52e60b520e4df       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   24 seconds ago      Running             kube-proxy                2                   1252249eb3f9c       kube-proxy-l2m9f
	75d80eeba4737       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   38 seconds ago      Exited              coredns                   1                   8ac8a124adbd1       coredns-5dd5756b68-h88pr
	0dc0e68d780b6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   38 seconds ago      Exited              kube-controller-manager   1                   ee2ecf3c9f0d1       kube-controller-manager-pause-848791
	0293df94c7faa       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   38 seconds ago      Exited              kube-apiserver            1                   e9c962de8f376       kube-apiserver-pause-848791
	1698055d49d79       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   38 seconds ago      Exited              kube-scheduler            1                   bf1c2ad33f3b9       kube-scheduler-pause-848791
	40864953bcc58       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   38 seconds ago      Exited              etcd                      1                   63df0d4c45c21       etcd-pause-848791
	fcaeddb617b38       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   38 seconds ago      Exited              kube-proxy                1                   179a79775293c       kube-proxy-l2m9f
	
	
	==> coredns [75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44809 - 31460 "HINFO IN 8728867127159481112.4838692107368353272. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013671482s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [cbe3e63c84bcdfc2a9e32f8b51b9563e5c2ba10cdb986e775bae6f10e977eb65] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50176 - 62743 "HINFO IN 8729442428919199151.538908865851346355. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010574786s
	
	
	==> describe nodes <==
	Name:               pause-848791
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-848791
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=pause-848791
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T18_45_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:45:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-848791
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:45:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:46:20 +0000   Thu, 29 Feb 2024 18:46:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.95
	  Hostname:    pause-848791
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c3286c60203476bb89ba13e3695c75b
	  System UUID:                7c3286c6-0203-476b-b89b-a13e3695c75b
	  Boot ID:                    cb1b3dd6-faec-484b-ac50-a4318b0903ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-h88pr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     115s
	  kube-system                 etcd-pause-848791                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m9s
	  kube-system                 kube-apiserver-pause-848791             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-pause-848791    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-l2m9f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-pause-848791             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 112s   kube-proxy       
	  Normal  Starting                 20s    kube-proxy       
	  Normal  Starting                 2m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s  kubelet          Node pause-848791 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s  kubelet          Node pause-848791 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s  kubelet          Node pause-848791 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m9s   kubelet          Node pause-848791 status is now: NodeReady
	  Normal  RegisteredNode           117s   node-controller  Node pause-848791 event: Registered Node pause-848791 in Controller
	  Normal  RegisteredNode           8s     node-controller  Node pause-848791 event: Registered Node pause-848791 in Controller
	
	
	==> dmesg <==
	[  +0.042953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.726910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.308707] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.698720] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.397140] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.067578] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060350] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.176233] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.150206] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.295067] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +9.839324] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060434] kauditd_printk_skb: 130 callbacks suppressed
	[  +7.725396] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.079721] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 18:46] kauditd_printk_skb: 21 callbacks suppressed
	[ +40.015519] kauditd_printk_skb: 39 callbacks suppressed
	[Feb29 18:47] systemd-fstab-generator[2001]: Ignoring "noauto" option for root device
	[  +0.242061] systemd-fstab-generator[2030]: Ignoring "noauto" option for root device
	[  +0.710405] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.393218] systemd-fstab-generator[2405]: Ignoring "noauto" option for root device
	[  +0.449988] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[ +13.534980] kauditd_printk_skb: 169 callbacks suppressed
	[ +15.262820] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [40864953bcc58e61ed4476305a2f44e9ed90ebc42bc9e7c965e252e5fd1d64be] <==
	{"level":"warn","ts":"2024-02-29T18:47:31.110405Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-02-29T18:47:31.110726Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.72.95:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.72.95:2380","--initial-cluster=pause-848791=https://192.168.72.95:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.72.95:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.72.95:2380","--name=pause-848791","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-02-29T18:47:31.113734Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-02-29T18:47:31.113833Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-02-29T18:47:31.113871Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.72.95:2380"]}
	{"level":"info","ts":"2024-02-29T18:47:31.113941Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:47:31.11466Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"]}
	{"level":"info","ts":"2024-02-29T18:47:31.116771Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-848791","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.72.95:2380"],"listen-peer-urls":["https://192.168.72.95:2380"],"advertise-client-urls":["https://192.168.72.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-
token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-02-29T18:47:31.127451Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"10.430729ms"}
	{"level":"info","ts":"2024-02-29T18:47:31.138864Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-02-29T18:47:31.187353Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","commit-index":427}
	{"level":"info","ts":"2024-02-29T18:47:31.191483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a switched to configuration voters=()"}
	{"level":"info","ts":"2024-02-29T18:47:31.192887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became follower at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:31.19328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 96f44c7526de935a [peers: [], term: 2, commit: 427, applied: 0, lastindex: 427, lastterm: 2]"}
	{"level":"warn","ts":"2024-02-29T18:47:31.20201Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-02-29T18:47:31.231815Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":397}
	{"level":"info","ts":"2024-02-29T18:47:31.24852Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-02-29T18:47:31.254523Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"96f44c7526de935a","timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:31.255247Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"96f44c7526de935a"}
	{"level":"info","ts":"2024-02-29T18:47:31.255501Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"96f44c7526de935a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-02-29T18:47:31.259613Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	
	
	==> etcd [6c3841f85eb528d7a7eadf7119115c698c72e631581d922d2b823180d9fed894] <==
	{"level":"info","ts":"2024-02-29T18:47:45.568277Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.95:2380"}
	{"level":"info","ts":"2024-02-29T18:47:45.568812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"96f44c7526de935a","initial-advertise-peer-urls":["https://192.168.72.95:2380"],"listen-peer-urls":["https://192.168.72.95:2380"],"advertise-client-urls":["https://192.168.72.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:47:45.569094Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:47:45.565765Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T18:47:45.565928Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.575924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.575943Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:47:45.566211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a switched to configuration voters=(10877403066053595994)"}
	{"level":"info","ts":"2024-02-29T18:47:45.576078Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","added-peer-id":"96f44c7526de935a","added-peer-peer-urls":["https://192.168.72.95:2380"]}
	{"level":"info","ts":"2024-02-29T18:47:45.576184Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"52f62c7071e5a955","local-member-id":"96f44c7526de935a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:45.576215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:47:47.036278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a received MsgPreVoteResp from 96f44c7526de935a at term 2"}
	{"level":"info","ts":"2024-02-29T18:47:47.036487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a received MsgVoteResp from 96f44c7526de935a at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96f44c7526de935a became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.036521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 96f44c7526de935a elected leader 96f44c7526de935a at term 3"}
	{"level":"info","ts":"2024-02-29T18:47:47.038718Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"96f44c7526de935a","local-member-attributes":"{Name:pause-848791 ClientURLs:[https://192.168.72.95:2379]}","request-path":"/0/members/96f44c7526de935a/attributes","cluster-id":"52f62c7071e5a955","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:47:47.038788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:47.039148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:47.039227Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:47:47.038743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:47:47.04124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.95:2379"}
	{"level":"info","ts":"2024-02-29T18:47:47.041827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:48:09 up 2 min,  0 users,  load average: 2.82, 1.02, 0.37
	Linux pause-848791 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0293df94c7faabe36e668dea3a4c280fc5c47f3684ca2610f72a365d980c587d] <==
	I0229 18:47:30.956442       1 options.go:220] external host was not specified, using 192.168.72.95
	I0229 18:47:30.957827       1 server.go:148] Version: v1.28.4
	I0229 18:47:30.957870       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [50ebd4f90068611d5c0c2eafe3a7e1b4a0e88163f29af55dd0284113aec8522e] <==
	I0229 18:47:48.684511       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 18:47:48.684670       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0229 18:47:48.684777       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0229 18:47:48.684873       1 available_controller.go:423] Starting AvailableConditionController
	I0229 18:47:48.684925       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0229 18:47:48.684964       1 controller.go:78] Starting OpenAPI AggregationController
	I0229 18:47:48.685103       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0229 18:47:48.685404       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0229 18:47:48.774738       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:47:48.777150       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:47:48.780256       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 18:47:48.780362       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 18:47:48.780548       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:47:48.780738       1 aggregator.go:166] initial CRD sync complete...
	I0229 18:47:48.780754       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:47:48.780759       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:47:48.780764       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:47:48.784884       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:47:48.790749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:47:48.795688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0229 18:47:48.807805       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 18:47:48.849064       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:47:49.681857       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 18:48:01.822202       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:48:01.875401       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0dc0e68d780b68bbe86835c32ac6de7d8024a47c70a97609bcabe11af0b5c75d] <==
	
	
	==> kube-controller-manager [aa631b99c5f8167398dcd7410ce8a8ba4cebf7088379b12a7dffa5e5d6d12a58] <==
	I0229 18:48:01.847657       1 shared_informer.go:318] Caches are synced for service account
	I0229 18:48:01.850729       1 shared_informer.go:318] Caches are synced for deployment
	I0229 18:48:01.853662       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 18:48:01.853690       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0229 18:48:01.857050       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 18:48:01.859728       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0229 18:48:01.860525       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 18:48:01.860855       1 shared_informer.go:318] Caches are synced for TTL
	I0229 18:48:01.860933       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 18:48:01.861031       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 18:48:01.862929       1 shared_informer.go:318] Caches are synced for job
	I0229 18:48:01.867372       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0229 18:48:01.894951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.414569ms"
	I0229 18:48:01.897888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.426µs"
	I0229 18:48:01.933353       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 18:48:01.936978       1 shared_informer.go:318] Caches are synced for disruption
	I0229 18:48:01.980739       1 shared_informer.go:318] Caches are synced for HPA
	I0229 18:48:02.042546       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:48:02.042767       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:48:02.398778       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:48:02.409525       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:48:02.409739       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 18:48:03.085531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="151.675µs"
	I0229 18:48:03.110795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.788027ms"
	I0229 18:48:03.110896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.77µs"
	
	
	==> kube-proxy [52e60b520e4df16de45d7213e50215a6cd3736d514def230d116774ba4b875f9] <==
	I0229 18:47:46.430435       1 server_others.go:69] "Using iptables proxy"
	I0229 18:47:48.810081       1 node.go:141] Successfully retrieved node IP: 192.168.72.95
	I0229 18:47:48.901243       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:47:48.901317       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:47:48.904298       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:47:48.904409       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:47:48.904811       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:47:48.904857       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:48.905948       1 config.go:188] "Starting service config controller"
	I0229 18:47:48.906024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:47:48.906062       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:47:48.906105       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:47:48.906785       1 config.go:315] "Starting node config controller"
	I0229 18:47:48.906820       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:47:49.007150       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:47:49.007216       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:47:49.007248       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fcaeddb617b386f721fdbd313347a4c765b8337499ef9ddbc68ce341569f2fcf] <==
	
	
	==> kube-scheduler [1698055d49d7942b06c62f78ab6d58bfe5a511ec064ec35566d3626dab70f969] <==
	
	
	==> kube-scheduler [996c3c479f668777630d386ba70c091466ebe2d267160ed282e9361696cbffcf] <==
	I0229 18:47:46.367536       1 serving.go:348] Generated self-signed cert in-memory
	W0229 18:47:48.759288       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:47:48.759483       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:47:48.759814       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:47:48.759944       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:47:48.809299       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 18:47:48.810968       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:47:48.813264       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:47:48.813676       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:47:48.813856       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:47:48.814125       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:47:48.914676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.846359    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.849845    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: E0229 18:47:45.850772    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.850918    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.859214    1225 status_manager.go:853] "Failed to get status for pod" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd" pod="kube-system/coredns-5dd5756b68-h88pr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h88pr\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.859888    1225 status_manager.go:853] "Failed to get status for pod" podUID="6eb35ebc21e5130b09eb73823cab2d15" pod="kube-system/kube-controller-manager-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.879360    1225 status_manager.go:853] "Failed to get status for pod" podUID="e52a40e76baf97a307c38f1a6ffe05c5" pod="kube-system/kube-scheduler-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.883067    1225 status_manager.go:853] "Failed to get status for pod" podUID="048336b3943725df307a6a6dcf28ff99" pod="kube-system/etcd-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.883834    1225 status_manager.go:853] "Failed to get status for pod" podUID="79d3ca487c0ab7d16b95c0911752c3c9" pod="kube-system/kube-apiserver-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.884751    1225 status_manager.go:853] "Failed to get status for pod" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd" pod="kube-system/coredns-5dd5756b68-h88pr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h88pr\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.885316    1225 status_manager.go:853] "Failed to get status for pod" podUID="6eb35ebc21e5130b09eb73823cab2d15" pod="kube-system/kube-controller-manager-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.886396    1225 status_manager.go:853] "Failed to get status for pod" podUID="e52a40e76baf97a307c38f1a6ffe05c5" pod="kube-system/kube-scheduler-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.887028    1225 status_manager.go:853] "Failed to get status for pod" podUID="048336b3943725df307a6a6dcf28ff99" pod="kube-system/etcd-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.887710    1225 status_manager.go:853] "Failed to get status for pod" podUID="79d3ca487c0ab7d16b95c0911752c3c9" pod="kube-system/kube-apiserver-pause-848791" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-848791\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:45 pause-848791 kubelet[1225]: I0229 18:47:45.893844    1225 status_manager.go:853] "Failed to get status for pod" podUID="41adf7f1-0c82-4136-a271-819137db321b" pod="kube-system/kube-proxy-l2m9f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l2m9f\": dial tcp 192.168.72.95:8443: connect: connection refused"
	Feb 29 18:47:46 pause-848791 kubelet[1225]: I0229 18:47:46.888287    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:46 pause-848791 kubelet[1225]: E0229 18:47:46.889412    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:47:47 pause-848791 kubelet[1225]: I0229 18:47:47.887301    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	Feb 29 18:47:47 pause-848791 kubelet[1225]: E0229 18:47:47.887665    1225 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-h88pr_kube-system(dd96b56f-afb7-4472-b92a-2026983e58bd)\"" pod="kube-system/coredns-5dd5756b68-h88pr" podUID="dd96b56f-afb7-4472-b92a-2026983e58bd"
	Feb 29 18:48:00 pause-848791 kubelet[1225]: E0229 18:48:00.039868    1225 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:48:00 pause-848791 kubelet[1225]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:48:00 pause-848791 kubelet[1225]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:48:01 pause-848791 kubelet[1225]: I0229 18:48:01.926882    1225 scope.go:117] "RemoveContainer" containerID="75d80eeba4737b9bd73fec56902d2fc18e6770ff5b7f6e9ee9c82acee198e7dc"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-848791 -n pause-848791
helpers_test.go:261: (dbg) Run:  kubectl --context pause-848791 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (72.04s)
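The kubelet log above shows every pod status update to https://control-plane.minikube.internal:8443 failing with "connection refused" while coredns sits in CrashLoopBackOff, which is consistent with the apiserver on 192.168.72.95 being unreachable during the second start. A minimal manual triage sketch for this state, assuming the pause-848791 profile is still up and that crictl is available inside the guest (both assumptions; these commands are not part of the recorded test run):

	# Hypothetical follow-up: inspect container state and recent kubelet output on the node
	out/minikube-linux-amd64 -p pause-848791 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p pause-848791 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"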

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-247197 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-247197 --alsologtostderr -v=3: exit status 82 (2m0.319467197s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-247197"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:49:56.716678   46528 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:49:56.716830   46528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:49:56.716839   46528 out.go:304] Setting ErrFile to fd 2...
	I0229 18:49:56.716843   46528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:49:56.717135   46528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:49:56.720251   46528 out.go:298] Setting JSON to false
	I0229 18:49:56.720339   46528 mustload.go:65] Loading cluster: no-preload-247197
	I0229 18:49:56.720908   46528 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:49:56.720992   46528 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:49:56.721207   46528 mustload.go:65] Loading cluster: no-preload-247197
	I0229 18:49:56.721334   46528 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:49:56.721365   46528 stop.go:39] StopHost: no-preload-247197
	I0229 18:49:56.721875   46528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:49:56.721931   46528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:56.737939   46528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0229 18:49:56.738457   46528 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:56.739176   46528 main.go:141] libmachine: Using API Version  1
	I0229 18:49:56.739201   46528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:56.739609   46528 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:56.742244   46528 out.go:177] * Stopping node "no-preload-247197"  ...
	I0229 18:49:56.743663   46528 main.go:141] libmachine: Stopping "no-preload-247197"...
	I0229 18:49:56.743709   46528 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 18:49:56.747120   46528 main.go:141] libmachine: (no-preload-247197) Calling .Stop
	I0229 18:49:56.750231   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 0/120
	I0229 18:49:57.751986   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 1/120
	I0229 18:49:58.753493   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 2/120
	I0229 18:49:59.755251   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 3/120
	I0229 18:50:00.757720   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 4/120
	I0229 18:50:01.759241   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 5/120
	I0229 18:50:02.760556   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 6/120
	I0229 18:50:03.761945   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 7/120
	I0229 18:50:04.764239   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 8/120
	I0229 18:50:05.765532   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 9/120
	I0229 18:50:06.767725   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 10/120
	I0229 18:50:07.768940   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 11/120
	I0229 18:50:08.770287   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 12/120
	I0229 18:50:09.771499   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 13/120
	I0229 18:50:10.772851   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 14/120
	I0229 18:50:11.775745   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 15/120
	I0229 18:50:12.777568   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 16/120
	I0229 18:50:13.778782   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 17/120
	I0229 18:50:14.780188   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 18/120
	I0229 18:50:15.781720   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 19/120
	I0229 18:50:16.783592   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 20/120
	I0229 18:50:17.785444   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 21/120
	I0229 18:50:18.787206   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 22/120
	I0229 18:50:19.789279   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 23/120
	I0229 18:50:20.790761   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 24/120
	I0229 18:50:21.792874   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 25/120
	I0229 18:50:22.794202   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 26/120
	I0229 18:50:23.795455   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 27/120
	I0229 18:50:24.796739   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 28/120
	I0229 18:50:25.798254   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 29/120
	I0229 18:50:26.799780   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 30/120
	I0229 18:50:27.801183   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 31/120
	I0229 18:50:28.802523   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 32/120
	I0229 18:50:29.804143   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 33/120
	I0229 18:50:30.805454   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 34/120
	I0229 18:50:31.807299   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 35/120
	I0229 18:50:32.808514   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 36/120
	I0229 18:50:33.809894   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 37/120
	I0229 18:50:34.811145   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 38/120
	I0229 18:50:35.812802   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 39/120
	I0229 18:50:36.814648   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 40/120
	I0229 18:50:37.816006   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 41/120
	I0229 18:50:38.817501   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 42/120
	I0229 18:50:39.818874   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 43/120
	I0229 18:50:40.820235   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 44/120
	I0229 18:50:41.822250   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 45/120
	I0229 18:50:42.824088   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 46/120
	I0229 18:50:43.825478   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 47/120
	I0229 18:50:44.827031   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 48/120
	I0229 18:50:45.828318   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 49/120
	I0229 18:50:46.830309   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 50/120
	I0229 18:50:47.831702   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 51/120
	I0229 18:50:48.833187   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 52/120
	I0229 18:50:49.834704   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 53/120
	I0229 18:50:50.836164   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 54/120
	I0229 18:50:51.838213   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 55/120
	I0229 18:50:52.839723   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 56/120
	I0229 18:50:53.841659   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 57/120
	I0229 18:50:54.843147   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 58/120
	I0229 18:50:55.845519   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 59/120
	I0229 18:50:56.847510   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 60/120
	I0229 18:50:57.849019   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 61/120
	I0229 18:50:58.850617   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 62/120
	I0229 18:50:59.852000   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 63/120
	I0229 18:51:00.853487   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 64/120
	I0229 18:51:01.855597   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 65/120
	I0229 18:51:02.857014   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 66/120
	I0229 18:51:03.858360   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 67/120
	I0229 18:51:04.859623   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 68/120
	I0229 18:51:05.861543   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 69/120
	I0229 18:51:06.863746   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 70/120
	I0229 18:51:07.865645   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 71/120
	I0229 18:51:08.866877   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 72/120
	I0229 18:51:09.868108   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 73/120
	I0229 18:51:10.869341   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 74/120
	I0229 18:51:11.871070   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 75/120
	I0229 18:51:12.872570   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 76/120
	I0229 18:51:13.874060   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 77/120
	I0229 18:51:14.875413   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 78/120
	I0229 18:51:15.876852   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 79/120
	I0229 18:51:16.878887   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 80/120
	I0229 18:51:17.880261   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 81/120
	I0229 18:51:18.881697   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 82/120
	I0229 18:51:19.883252   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 83/120
	I0229 18:51:20.884644   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 84/120
	I0229 18:51:21.886475   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 85/120
	I0229 18:51:22.888010   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 86/120
	I0229 18:51:23.889256   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 87/120
	I0229 18:51:24.890563   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 88/120
	I0229 18:51:25.892481   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 89/120
	I0229 18:51:26.894643   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 90/120
	I0229 18:51:27.896028   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 91/120
	I0229 18:51:28.897542   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 92/120
	I0229 18:51:29.898868   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 93/120
	I0229 18:51:30.900252   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 94/120
	I0229 18:51:31.902319   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 95/120
	I0229 18:51:32.903694   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 96/120
	I0229 18:51:33.905093   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 97/120
	I0229 18:51:34.906486   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 98/120
	I0229 18:51:35.907633   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 99/120
	I0229 18:51:36.909853   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 100/120
	I0229 18:51:37.911410   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 101/120
	I0229 18:51:38.912698   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 102/120
	I0229 18:51:39.914009   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 103/120
	I0229 18:51:40.915403   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 104/120
	I0229 18:51:41.917392   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 105/120
	I0229 18:51:42.918653   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 106/120
	I0229 18:51:43.920735   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 107/120
	I0229 18:51:44.922084   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 108/120
	I0229 18:51:45.923362   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 109/120
	I0229 18:51:46.925489   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 110/120
	I0229 18:51:47.927017   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 111/120
	I0229 18:51:48.928417   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 112/120
	I0229 18:51:49.929826   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 113/120
	I0229 18:51:50.931225   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 114/120
	I0229 18:51:51.933244   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 115/120
	I0229 18:51:52.934859   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 116/120
	I0229 18:51:53.937142   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 117/120
	I0229 18:51:54.938348   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 118/120
	I0229 18:51:55.939758   46528 main.go:141] libmachine: (no-preload-247197) Waiting for machine to stop 119/120
	I0229 18:51:56.940980   46528 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 18:51:56.941041   46528 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 18:51:56.942922   46528 out.go:177] 
	W0229 18:51:56.944545   46528 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 18:51:56.944565   46528 out.go:239] * 
	* 
	W0229 18:51:56.946909   46528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:51:56.948225   46528 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-247197 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197: exit status 3 (18.462843335s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:15.411381   47262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0229 18:52:15.411401   47262 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-247197" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.78s)
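Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the stop command polled the machine state 120 times over two minutes and the KVM guest never left "Running". A sketch of how one might confirm this from the libvirt side, assuming the libvirt domain carries the profile name no-preload-247197 (an assumption; the domain name is not shown in this report):

	# Hypothetical manual check: is the VM really still running, and does it respond to a shutdown request?
	sudo virsh list --all
	sudo virsh shutdown no-preload-247197    # graceful ACPI shutdown request
	sudo virsh destroy no-preload-247197     # hard power-off if the guest ignores ACPI

The same stop-timeout pattern repeats for the embed-certs-991128 and default-k8s-diff-port-153528 profiles below.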

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-991128 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-991128 --alsologtostderr -v=3: exit status 82 (2m0.280665667s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-991128"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:50:04.151564   46633 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:50:04.151682   46633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:50:04.151693   46633 out.go:304] Setting ErrFile to fd 2...
	I0229 18:50:04.151699   46633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:50:04.151915   46633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:50:04.152207   46633 out.go:298] Setting JSON to false
	I0229 18:50:04.152297   46633 mustload.go:65] Loading cluster: embed-certs-991128
	I0229 18:50:04.152628   46633 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:50:04.152713   46633 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/config.json ...
	I0229 18:50:04.152893   46633 mustload.go:65] Loading cluster: embed-certs-991128
	I0229 18:50:04.153027   46633 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:50:04.153072   46633 stop.go:39] StopHost: embed-certs-991128
	I0229 18:50:04.153497   46633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:50:04.153550   46633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:50:04.167785   46633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0229 18:50:04.168431   46633 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:50:04.169136   46633 main.go:141] libmachine: Using API Version  1
	I0229 18:50:04.169160   46633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:50:04.169514   46633 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:50:04.171809   46633 out.go:177] * Stopping node "embed-certs-991128"  ...
	I0229 18:50:04.173308   46633 main.go:141] libmachine: Stopping "embed-certs-991128"...
	I0229 18:50:04.173335   46633 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 18:50:04.174818   46633 main.go:141] libmachine: (embed-certs-991128) Calling .Stop
	I0229 18:50:04.178303   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 0/120
	I0229 18:50:05.179894   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 1/120
	I0229 18:50:06.181311   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 2/120
	I0229 18:50:07.183339   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 3/120
	I0229 18:50:08.185598   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 4/120
	I0229 18:50:09.187415   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 5/120
	I0229 18:50:10.189617   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 6/120
	I0229 18:50:11.190851   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 7/120
	I0229 18:50:12.192262   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 8/120
	I0229 18:50:13.193763   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 9/120
	I0229 18:50:14.196370   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 10/120
	I0229 18:50:15.199007   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 11/120
	I0229 18:50:16.200514   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 12/120
	I0229 18:50:17.202160   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 13/120
	I0229 18:50:18.204196   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 14/120
	I0229 18:50:19.205694   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 15/120
	I0229 18:50:20.207247   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 16/120
	I0229 18:50:21.209490   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 17/120
	I0229 18:50:22.211835   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 18/120
	I0229 18:50:23.213213   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 19/120
	I0229 18:50:24.215048   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 20/120
	I0229 18:50:25.216168   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 21/120
	I0229 18:50:26.217488   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 22/120
	I0229 18:50:27.218735   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 23/120
	I0229 18:50:28.220108   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 24/120
	I0229 18:50:29.221947   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 25/120
	I0229 18:50:30.223355   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 26/120
	I0229 18:50:31.225610   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 27/120
	I0229 18:50:32.227127   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 28/120
	I0229 18:50:33.228281   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 29/120
	I0229 18:50:34.230201   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 30/120
	I0229 18:50:35.231377   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 31/120
	I0229 18:50:36.232785   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 32/120
	I0229 18:50:37.234038   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 33/120
	I0229 18:50:38.235387   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 34/120
	I0229 18:50:39.237315   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 35/120
	I0229 18:50:40.239322   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 36/120
	I0229 18:50:41.240556   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 37/120
	I0229 18:50:42.242748   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 38/120
	I0229 18:50:43.244240   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 39/120
	I0229 18:50:44.246640   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 40/120
	I0229 18:50:45.248244   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 41/120
	I0229 18:50:46.250491   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 42/120
	I0229 18:50:47.252053   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 43/120
	I0229 18:50:48.253484   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 44/120
	I0229 18:50:49.255090   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 45/120
	I0229 18:50:50.257269   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 46/120
	I0229 18:50:51.258660   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 47/120
	I0229 18:50:52.260788   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 48/120
	I0229 18:50:53.261973   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 49/120
	I0229 18:50:54.264185   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 50/120
	I0229 18:50:55.265777   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 51/120
	I0229 18:50:56.267340   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 52/120
	I0229 18:50:57.268705   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 53/120
	I0229 18:50:58.270480   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 54/120
	I0229 18:50:59.272484   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 55/120
	I0229 18:51:00.273835   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 56/120
	I0229 18:51:01.275058   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 57/120
	I0229 18:51:02.276363   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 58/120
	I0229 18:51:03.277554   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 59/120
	I0229 18:51:04.279759   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 60/120
	I0229 18:51:05.281117   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 61/120
	I0229 18:51:06.282491   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 62/120
	I0229 18:51:07.283889   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 63/120
	I0229 18:51:08.285367   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 64/120
	I0229 18:51:09.287242   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 65/120
	I0229 18:51:10.288631   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 66/120
	I0229 18:51:11.289841   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 67/120
	I0229 18:51:12.291178   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 68/120
	I0229 18:51:13.292538   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 69/120
	I0229 18:51:14.294589   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 70/120
	I0229 18:51:15.295951   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 71/120
	I0229 18:51:16.297268   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 72/120
	I0229 18:51:17.298679   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 73/120
	I0229 18:51:18.299922   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 74/120
	I0229 18:51:19.301903   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 75/120
	I0229 18:51:20.303151   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 76/120
	I0229 18:51:21.304480   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 77/120
	I0229 18:51:22.305719   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 78/120
	I0229 18:51:23.307133   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 79/120
	I0229 18:51:24.309345   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 80/120
	I0229 18:51:25.310316   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 81/120
	I0229 18:51:26.311813   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 82/120
	I0229 18:51:27.313347   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 83/120
	I0229 18:51:28.314770   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 84/120
	I0229 18:51:29.317149   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 85/120
	I0229 18:51:30.318587   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 86/120
	I0229 18:51:31.320107   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 87/120
	I0229 18:51:32.321431   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 88/120
	I0229 18:51:33.322765   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 89/120
	I0229 18:51:34.324851   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 90/120
	I0229 18:51:35.326208   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 91/120
	I0229 18:51:36.327615   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 92/120
	I0229 18:51:37.328982   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 93/120
	I0229 18:51:38.330309   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 94/120
	I0229 18:51:39.332205   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 95/120
	I0229 18:51:40.333526   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 96/120
	I0229 18:51:41.334944   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 97/120
	I0229 18:51:42.336258   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 98/120
	I0229 18:51:43.337597   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 99/120
	I0229 18:51:44.339760   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 100/120
	I0229 18:51:45.341207   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 101/120
	I0229 18:51:46.342613   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 102/120
	I0229 18:51:47.343912   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 103/120
	I0229 18:51:48.345471   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 104/120
	I0229 18:51:49.347553   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 105/120
	I0229 18:51:50.348943   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 106/120
	I0229 18:51:51.350224   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 107/120
	I0229 18:51:52.351699   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 108/120
	I0229 18:51:53.353213   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 109/120
	I0229 18:51:54.355315   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 110/120
	I0229 18:51:55.356438   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 111/120
	I0229 18:51:56.358175   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 112/120
	I0229 18:51:57.359294   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 113/120
	I0229 18:51:58.360851   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 114/120
	I0229 18:51:59.362808   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 115/120
	I0229 18:52:00.364326   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 116/120
	I0229 18:52:01.365584   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 117/120
	I0229 18:52:02.367198   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 118/120
	I0229 18:52:03.368688   46633 main.go:141] libmachine: (embed-certs-991128) Waiting for machine to stop 119/120
	I0229 18:52:04.369380   46633 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 18:52:04.369431   46633 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 18:52:04.371602   46633 out.go:177] 
	W0229 18:52:04.373244   46633 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 18:52:04.373257   46633 out.go:239] * 
	* 
	W0229 18:52:04.375450   46633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:52:04.376819   46633 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-991128 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128: exit status 3 (18.457088568s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:22.835309   47302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host
	E0229 18:52:22.835328   47302 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-991128" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-153528 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-153528 --alsologtostderr -v=3: exit status 82 (2m0.277981779s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-153528"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:51:07.975336   46973 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:51:07.975614   46973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:51:07.975628   46973 out.go:304] Setting ErrFile to fd 2...
	I0229 18:51:07.975635   46973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:51:07.975925   46973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:51:07.976254   46973 out.go:298] Setting JSON to false
	I0229 18:51:07.976349   46973 mustload.go:65] Loading cluster: default-k8s-diff-port-153528
	I0229 18:51:07.976832   46973 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:51:07.976939   46973 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:51:07.977158   46973 mustload.go:65] Loading cluster: default-k8s-diff-port-153528
	I0229 18:51:07.977324   46973 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:51:07.977362   46973 stop.go:39] StopHost: default-k8s-diff-port-153528
	I0229 18:51:07.977898   46973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:51:07.977956   46973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:51:07.992410   46973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0229 18:51:07.992913   46973 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:51:07.993530   46973 main.go:141] libmachine: Using API Version  1
	I0229 18:51:07.993562   46973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:51:07.993918   46973 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:51:07.996080   46973 out.go:177] * Stopping node "default-k8s-diff-port-153528"  ...
	I0229 18:51:07.997178   46973 main.go:141] libmachine: Stopping "default-k8s-diff-port-153528"...
	I0229 18:51:07.997197   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 18:51:07.998843   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Stop
	I0229 18:51:08.002259   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 0/120
	I0229 18:51:09.003683   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 1/120
	I0229 18:51:10.004927   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 2/120
	I0229 18:51:11.006394   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 3/120
	I0229 18:51:12.007805   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 4/120
	I0229 18:51:13.009831   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 5/120
	I0229 18:51:14.011367   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 6/120
	I0229 18:51:15.013636   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 7/120
	I0229 18:51:16.014931   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 8/120
	I0229 18:51:17.016158   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 9/120
	I0229 18:51:18.018293   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 10/120
	I0229 18:51:19.019594   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 11/120
	I0229 18:51:20.020684   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 12/120
	I0229 18:51:21.021897   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 13/120
	I0229 18:51:22.023074   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 14/120
	I0229 18:51:23.024854   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 15/120
	I0229 18:51:24.026229   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 16/120
	I0229 18:51:25.027307   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 17/120
	I0229 18:51:26.029884   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 18/120
	I0229 18:51:27.031230   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 19/120
	I0229 18:51:28.033213   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 20/120
	I0229 18:51:29.034462   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 21/120
	I0229 18:51:30.035866   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 22/120
	I0229 18:51:31.037102   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 23/120
	I0229 18:51:32.038432   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 24/120
	I0229 18:51:33.040220   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 25/120
	I0229 18:51:34.041640   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 26/120
	I0229 18:51:35.043578   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 27/120
	I0229 18:51:36.044803   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 28/120
	I0229 18:51:37.046170   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 29/120
	I0229 18:51:38.048169   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 30/120
	I0229 18:51:39.049349   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 31/120
	I0229 18:51:40.050785   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 32/120
	I0229 18:51:41.051994   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 33/120
	I0229 18:51:42.053316   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 34/120
	I0229 18:51:43.055405   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 35/120
	I0229 18:51:44.056880   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 36/120
	I0229 18:51:45.058246   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 37/120
	I0229 18:51:46.059631   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 38/120
	I0229 18:51:47.060973   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 39/120
	I0229 18:51:48.062887   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 40/120
	I0229 18:51:49.064342   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 41/120
	I0229 18:51:50.065766   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 42/120
	I0229 18:51:51.067068   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 43/120
	I0229 18:51:52.068552   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 44/120
	I0229 18:51:53.070427   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 45/120
	I0229 18:51:54.072325   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 46/120
	I0229 18:51:55.073517   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 47/120
	I0229 18:51:56.074929   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 48/120
	I0229 18:51:57.075834   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 49/120
	I0229 18:51:58.077970   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 50/120
	I0229 18:51:59.079292   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 51/120
	I0229 18:52:00.080769   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 52/120
	I0229 18:52:01.082088   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 53/120
	I0229 18:52:02.083519   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 54/120
	I0229 18:52:03.085639   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 55/120
	I0229 18:52:04.087217   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 56/120
	I0229 18:52:05.088559   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 57/120
	I0229 18:52:06.089943   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 58/120
	I0229 18:52:07.091264   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 59/120
	I0229 18:52:08.093359   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 60/120
	I0229 18:52:09.094823   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 61/120
	I0229 18:52:10.096247   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 62/120
	I0229 18:52:11.097841   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 63/120
	I0229 18:52:12.099157   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 64/120
	I0229 18:52:13.101185   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 65/120
	I0229 18:52:14.102580   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 66/120
	I0229 18:52:15.103854   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 67/120
	I0229 18:52:16.105353   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 68/120
	I0229 18:52:17.106606   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 69/120
	I0229 18:52:18.108912   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 70/120
	I0229 18:52:19.110335   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 71/120
	I0229 18:52:20.111647   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 72/120
	I0229 18:52:21.113064   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 73/120
	I0229 18:52:22.114277   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 74/120
	I0229 18:52:23.115970   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 75/120
	I0229 18:52:24.117381   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 76/120
	I0229 18:52:25.118699   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 77/120
	I0229 18:52:26.119984   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 78/120
	I0229 18:52:27.121249   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 79/120
	I0229 18:52:28.123135   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 80/120
	I0229 18:52:29.124366   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 81/120
	I0229 18:52:30.125578   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 82/120
	I0229 18:52:31.126841   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 83/120
	I0229 18:52:32.128200   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 84/120
	I0229 18:52:33.130310   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 85/120
	I0229 18:52:34.131759   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 86/120
	I0229 18:52:35.133073   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 87/120
	I0229 18:52:36.134378   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 88/120
	I0229 18:52:37.135906   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 89/120
	I0229 18:52:38.137901   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 90/120
	I0229 18:52:39.139350   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 91/120
	I0229 18:52:40.140751   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 92/120
	I0229 18:52:41.142038   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 93/120
	I0229 18:52:42.143239   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 94/120
	I0229 18:52:43.144942   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 95/120
	I0229 18:52:44.146437   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 96/120
	I0229 18:52:45.147786   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 97/120
	I0229 18:52:46.149128   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 98/120
	I0229 18:52:47.150548   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 99/120
	I0229 18:52:48.152619   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 100/120
	I0229 18:52:49.154041   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 101/120
	I0229 18:52:50.155405   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 102/120
	I0229 18:52:51.156870   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 103/120
	I0229 18:52:52.158377   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 104/120
	I0229 18:52:53.160249   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 105/120
	I0229 18:52:54.161661   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 106/120
	I0229 18:52:55.163113   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 107/120
	I0229 18:52:56.164600   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 108/120
	I0229 18:52:57.165888   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 109/120
	I0229 18:52:58.167903   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 110/120
	I0229 18:52:59.169475   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 111/120
	I0229 18:53:00.171013   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 112/120
	I0229 18:53:01.172579   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 113/120
	I0229 18:53:02.174194   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 114/120
	I0229 18:53:03.176084   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 115/120
	I0229 18:53:04.177500   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 116/120
	I0229 18:53:05.178893   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 117/120
	I0229 18:53:06.180272   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 118/120
	I0229 18:53:07.181881   46973 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for machine to stop 119/120
	I0229 18:53:08.182580   46973 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 18:53:08.182639   46973 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 18:53:08.184914   46973 out.go:177] 
	W0229 18:53:08.186468   46973 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 18:53:08.186493   46973 out.go:239] * 
	* 
	W0229 18:53:08.188935   46973 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:53:08.190323   46973 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-153528 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528: exit status 3 (18.643159127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:53:26.835358   47755 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host
	E0229 18:53:26.835379   47755 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-153528" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-631080 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-631080 create -f testdata/busybox.yaml: exit status 1 (43.957232ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-631080" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-631080 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 6 (235.749637ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:51:25.416312   47079 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-631080" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 6 (230.381135ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:51:25.648877   47110 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-631080" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-631080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-631080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.458465113s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-631080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-631080 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-631080 describe deploy/metrics-server -n kube-system: exit status 1 (43.433884ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-631080" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-631080 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 6 (243.54398ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:53:13.393740   47805 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-631080" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197: exit status 3 (3.198022594s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:18.611330   47367 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0229 18:52:18.611356   47367 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-247197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-247197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153434612s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-247197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197: exit status 3 (3.062567948s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:27.827455   47455 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host
	E0229 18:52:27.827477   47455 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-247197" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128: exit status 3 (3.167736254s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:26.003339   47425 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host
	E0229 18:52:26.003363   47425 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-991128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-991128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151597789s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-991128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128: exit status 3 (3.064264462s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:52:35.219380   47567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host
	E0229 18:52:35.219401   47567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.34:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-991128" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (774.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: exit status 109 (12m51.44285479s)

                                                
                                                
-- stdout --
	* [old-k8s-version-631080] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-631080 in cluster old-k8s-version-631080
	* Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:53:14.936375   47919 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:53:14.936620   47919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:14.936629   47919 out.go:304] Setting ErrFile to fd 2...
	I0229 18:53:14.936633   47919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:14.936816   47919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:53:14.937322   47919 out.go:298] Setting JSON to false
	I0229 18:53:14.938190   47919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5739,"bootTime":1709227056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:53:14.938251   47919 start.go:139] virtualization: kvm guest
	I0229 18:53:14.940367   47919 out.go:177] * [old-k8s-version-631080] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:53:14.941660   47919 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:53:14.941681   47919 notify.go:220] Checking for updates...
	I0229 18:53:14.944468   47919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:53:14.946080   47919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:53:14.947748   47919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:53:14.949415   47919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:53:14.950860   47919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:53:14.952713   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:53:14.953076   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:14.953123   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:14.967573   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0229 18:53:14.967955   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:14.968486   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:53:14.968546   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:14.968931   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:14.969146   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:53:14.971184   47919 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 18:53:14.972468   47919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:53:14.972766   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:14.972811   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:14.987115   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0229 18:53:14.987513   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:14.987940   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:53:14.987962   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:14.988255   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:14.988422   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:53:15.022548   47919 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:53:15.024062   47919 start.go:299] selected driver: kvm2
	I0229 18:53:15.024074   47919 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:15.024164   47919 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:53:15.024787   47919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:15.024878   47919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:53:15.039209   47919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:53:15.039551   47919 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:53:15.039612   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:53:15.039629   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:53:15.039639   47919 start_flags.go:323] config:
	{Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:15.039773   47919 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:15.041723   47919 out.go:177] * Starting control plane node old-k8s-version-631080 in cluster old-k8s-version-631080
	I0229 18:53:15.042928   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:53:15.042959   47919 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 18:53:15.042982   47919 cache.go:56] Caching tarball of preloaded images
	I0229 18:53:15.043107   47919 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:53:15.043119   47919 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 18:53:15.043216   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:53:15.043389   47919 start.go:365] acquiring machines lock for old-k8s-version-631080: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:57:26.464131   47919 start.go:369] acquired machines lock for "old-k8s-version-631080" in 4m11.42071391s
	I0229 18:57:26.464193   47919 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:26.464200   47919 fix.go:54] fixHost starting: 
	I0229 18:57:26.464621   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:26.464657   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:26.480155   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0229 18:57:26.480488   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:26.481000   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:57:26.481027   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:26.481327   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:26.481514   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:26.481669   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:57:26.482869   47919 fix.go:102] recreateIfNeeded on old-k8s-version-631080: state=Stopped err=<nil>
	I0229 18:57:26.482885   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	W0229 18:57:26.483052   47919 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:26.485421   47919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	I0229 18:57:26.486586   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .Start
	I0229 18:57:26.486734   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:57:26.487377   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:57:26.487679   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:57:26.488006   47919 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:57:26.488624   47919 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:57:27.689480   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:57:27.690414   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:27.690858   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:27.690932   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:27.690848   48724 retry.go:31] will retry after 309.860592ms: waiting for machine to come up
	I0229 18:57:28.002437   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.002926   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.002959   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.002884   48724 retry.go:31] will retry after 298.018759ms: waiting for machine to come up
	I0229 18:57:28.302325   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.302849   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.302879   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.302801   48724 retry.go:31] will retry after 312.821928ms: waiting for machine to come up
	I0229 18:57:28.617315   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.617797   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.617831   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.617753   48724 retry.go:31] will retry after 373.960028ms: waiting for machine to come up
	I0229 18:57:28.993230   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.993860   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.993881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.993809   48724 retry.go:31] will retry after 516.423282ms: waiting for machine to come up
	I0229 18:57:29.512208   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:29.512683   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:29.512718   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:29.512651   48724 retry.go:31] will retry after 776.839747ms: waiting for machine to come up
	I0229 18:57:30.290748   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:30.291228   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:30.291276   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:30.291195   48724 retry.go:31] will retry after 846.002471ms: waiting for machine to come up
	I0229 18:57:31.139734   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:31.140157   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:31.140177   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:31.140114   48724 retry.go:31] will retry after 1.01688411s: waiting for machine to come up
	I0229 18:57:32.158306   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:32.158845   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:32.158868   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:32.158827   48724 retry.go:31] will retry after 1.217119434s: waiting for machine to come up
	I0229 18:57:33.377121   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:33.377508   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:33.377538   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:33.377475   48724 retry.go:31] will retry after 1.566910779s: waiting for machine to come up
	I0229 18:57:34.946027   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:35.171546   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:35.171576   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:34.946337   48724 retry.go:31] will retry after 2.169140366s: waiting for machine to come up
	I0229 18:57:37.117080   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:37.117531   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:37.117564   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:37.117491   48724 retry.go:31] will retry after 2.187461538s: waiting for machine to come up
	I0229 18:57:39.307825   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:39.308159   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:39.308199   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:39.308131   48724 retry.go:31] will retry after 4.480150028s: waiting for machine to come up
	I0229 18:57:43.790597   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:43.791050   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:43.791076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:43.790999   48724 retry.go:31] will retry after 3.830907426s: waiting for machine to come up
	I0229 18:57:47.623408   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623861   47919 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:57:47.623891   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623900   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:57:47.624340   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:57:47.624374   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.624390   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:57:47.624419   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | skip adding static IP to network mk-old-k8s-version-631080 - found existing host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"}
	I0229 18:57:47.624440   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:57:47.626600   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.626881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.626904   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.627042   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:57:47.627070   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:57:47.627106   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:47.627127   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:57:47.627146   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:57:47.751206   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:47.751569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:57:47.752158   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.754701   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755064   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.755089   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755331   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:57:47.755551   47919 machine.go:88] provisioning docker machine ...
	I0229 18:57:47.755569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:47.755772   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.755961   47919 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:57:47.755979   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.756102   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.758421   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758767   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.758796   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758895   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.759065   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759387   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.759548   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.759718   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.759730   47919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:57:47.879204   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:57:47.879233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.881915   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882207   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.882237   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882407   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.882582   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882737   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882880   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.883053   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.883244   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.883262   47919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:47.996920   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:47.996948   47919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:47.996964   47919 buildroot.go:174] setting up certificates
	I0229 18:57:47.996972   47919 provision.go:83] configureAuth start
	I0229 18:57:47.996980   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.997262   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.999702   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000044   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.000076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000207   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.002169   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002457   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.002479   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002552   47919 provision.go:138] copyHostCerts
	I0229 18:57:48.002600   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:48.002623   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:48.002690   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:48.002805   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:48.002820   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:48.002854   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:48.002936   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:48.002946   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:48.002965   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:48.003030   47919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:57:48.095543   47919 provision.go:172] copyRemoteCerts
	I0229 18:57:48.095594   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:48.095617   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.098167   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098411   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.098439   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098593   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.098770   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.098910   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.099046   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.178774   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:48.204896   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:57:48.234660   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:57:48.264189   47919 provision.go:86] duration metric: configureAuth took 267.20486ms
	I0229 18:57:48.264215   47919 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:48.264391   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:57:48.264464   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.267066   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267464   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.267500   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267721   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.267913   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268105   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268260   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.268425   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.268630   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.268658   47919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:48.560376   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:48.560401   47919 machine.go:91] provisioned docker machine in 804.837627ms
	I0229 18:57:48.560414   47919 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:57:48.560426   47919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:48.560450   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.560751   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:48.560776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.563312   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563638   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.563670   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.563971   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.564126   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.564295   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.646996   47919 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:48.652329   47919 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:48.652356   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:48.652428   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:48.652538   47919 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:48.652665   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:48.663432   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:48.694980   47919 start.go:303] post-start completed in 134.554808ms
	I0229 18:57:48.695000   47919 fix.go:56] fixHost completed within 22.230801566s
	I0229 18:57:48.695033   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.697788   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698205   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.698231   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698416   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.698633   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698797   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698941   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.699118   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.699327   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.699349   47919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:57:48.808714   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233068.793225740
	
	I0229 18:57:48.808740   47919 fix.go:206] guest clock: 1709233068.793225740
	I0229 18:57:48.808751   47919 fix.go:219] Guest: 2024-02-29 18:57:48.79322574 +0000 UTC Remote: 2024-02-29 18:57:48.695003912 +0000 UTC m=+273.807414604 (delta=98.221828ms)
	I0229 18:57:48.808793   47919 fix.go:190] guest clock delta is within tolerance: 98.221828ms
	I0229 18:57:48.808800   47919 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 22.344627122s
	I0229 18:57:48.808832   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.809114   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:48.811872   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812297   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.812336   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812522   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813072   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813270   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813347   47919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:48.813392   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.813509   47919 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:48.813536   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.816200   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816580   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.816607   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816676   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816753   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.816939   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817097   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817244   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.817268   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.817293   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.817420   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.817538   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817643   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817769   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.919708   47919 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:48.926381   47919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:49.086263   47919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:49.093350   47919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:49.093427   47919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:49.112686   47919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:49.112716   47919 start.go:475] detecting cgroup driver to use...
	I0229 18:57:49.112784   47919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:49.135232   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:49.152937   47919 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:49.152992   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:49.172048   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:49.190450   47919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:49.341605   47919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:49.539663   47919 docker.go:233] disabling docker service ...
	I0229 18:57:49.539733   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:49.562110   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:49.578761   47919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:49.739044   47919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:49.897866   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:49.918783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:49.941241   47919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:57:49.941328   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.953131   47919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:49.953215   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.964850   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.976035   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.988017   47919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:50.000990   47919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:50.019124   47919 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:50.019177   47919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:50.042881   47919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:50.054219   47919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:50.213793   47919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:50.387473   47919 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:50.387536   47919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:50.395113   47919 start.go:543] Will wait 60s for crictl version
	I0229 18:57:50.395177   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:50.400166   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:50.446910   47919 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:50.447015   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.486139   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.528290   47919 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:57:50.530077   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:50.533389   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.533761   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:50.533794   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.534001   47919 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:50.538857   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:50.556961   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:57:50.557028   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:50.616925   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:50.617001   47919 ssh_runner.go:195] Run: which lz4
	I0229 18:57:50.622857   47919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:57:50.628035   47919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:50.628070   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:57:52.679575   47919 crio.go:444] Took 2.056751 seconds to copy over tarball
	I0229 18:57:52.679656   47919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:55.661321   47919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.981628592s)
	I0229 18:57:55.661351   47919 crio.go:451] Took 2.981744 seconds to extract the tarball
	I0229 18:57:55.661363   47919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:55.708924   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:55.751627   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:55.751650   47919 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:57:55.751726   47919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.751752   47919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.751758   47919 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:57:55.751735   47919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.751751   47919 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.751772   47919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.751864   47919 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.752153   47919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753139   47919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.753456   47919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.753467   47919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:57:55.753486   47919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.753547   47919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.934620   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988723   47919 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:57:55.988767   47919 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988811   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:55.993750   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:56.036192   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.037872   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.038123   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:57:56.040846   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:57:56.046242   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.065126   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.077683   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.126642   47919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:57:56.126683   47919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.126741   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.191928   47919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:57:56.191980   47919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.192006   47919 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:57:56.192037   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.192045   47919 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:57:56.192086   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.203773   47919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:57:56.203819   47919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.203863   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227761   47919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:57:56.227799   47919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.227832   47919 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:57:56.227856   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227864   47919 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.227876   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.227922   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227925   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:57:56.227956   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.227961   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.246645   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.344012   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:57:56.344125   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:57:56.346352   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:57:56.361309   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.361484   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:57:56.383942   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:57:56.411697   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:57:56.649625   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:56.801430   47919 cache_images.go:92] LoadImages completed in 1.049765957s
	W0229 18:57:56.801578   47919 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0229 18:57:56.801670   47919 ssh_runner.go:195] Run: crio config
	I0229 18:57:56.872210   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:57:56.872238   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:56.872260   47919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:56.872283   47919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:57:56.872458   47919 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:56.872545   47919 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:56.872620   47919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:57:56.884571   47919 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:56.884647   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:56.896167   47919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:57:56.916824   47919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:56.938739   47919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:57:56.961411   47919 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:56.966134   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:56.981089   47919 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:57:56.981121   47919 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:56.981295   47919 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:56.981358   47919 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:56.981465   47919 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:57:56.981533   47919 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:57:56.981586   47919 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:57:56.981755   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:56.981791   47919 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:56.981806   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:56.981845   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:56.981878   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:56.981910   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:56.981961   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:56.982889   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:57.015587   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:57:57.048698   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:57.078634   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:57:57.114008   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:57.146884   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:57.179560   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:57.211989   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:57.246936   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:57.280651   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:57.310050   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:57.337439   47919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:57.359100   47919 ssh_runner.go:195] Run: openssl version
	I0229 18:57:57.366111   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:57.380593   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386703   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386771   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.401429   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:57.416516   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:57.429645   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.434960   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.435010   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.441855   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:57.457277   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:57.471345   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476556   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476629   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.483318   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
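	(Annotation, not part of the original log.) Each CA certificate above is linked into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash with a .0 suffix, which is how the system trust store indexes CAs. A hypothetical sketch of that second step, shelling out to openssl the same way the log does; it assumes the openssl binary is present and that the process may write under /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// Ask openssl for the subject hash used to name trust-store symlinks.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// Link the certificate under /etc/ssl/certs/<hash>.0, replacing any stale link.
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", cert, "->", link)
	}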
	I0229 18:57:57.496355   47919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:57.501976   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:57.509611   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:57.516861   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:57.523819   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:57.530959   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:57.539788   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
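	(Annotation, not part of the original log.) The `-checkend 86400` invocations above fail when a certificate expires within the next 24 hours, which is what forces certificate regeneration on restart. The equivalent check in pure Go with crypto/x509; the certificate path is just one of the files listed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: is the cert still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h, needs regeneration")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}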
	I0229 18:57:57.548575   47919 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:57.548663   47919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:57.548731   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:57.596234   47919 cri.go:89] found id: ""
	I0229 18:57:57.596327   47919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:57.612827   47919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:57.612856   47919 kubeadm.go:636] restartCluster start
	I0229 18:57:57.612903   47919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:57.627565   47919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:57.629049   47919 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:57:57.630139   47919 kubeconfig.go:146] "old-k8s-version-631080" context is missing from /home/jenkins/minikube-integration/18259-6428/kubeconfig - will repair!
	I0229 18:57:57.631735   47919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:57.634318   47919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:57.648383   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:57.648458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:57.663708   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.149010   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.149086   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.164430   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.649075   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.649186   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.663768   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.149370   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.149450   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.165089   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.648609   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.648690   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.665224   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.148880   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.148969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.168561   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.649227   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.649308   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.668162   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.148539   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.148600   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.168347   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.649392   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.649484   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.663974   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.149462   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.149548   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.164757   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.649398   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.649522   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.664014   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.148502   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.148718   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.165374   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.648528   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.648594   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.663305   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.148760   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.148847   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.163480   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.649122   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.649226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.663556   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.149421   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.149514   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.164236   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.648767   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.648856   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.664890   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.148979   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.149069   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.165186   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.649135   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.649245   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.665357   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.148896   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.148978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.163358   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.649238   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.649309   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.665329   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.665359   47919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
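	(Annotation, not part of the original log.) The repeated "Checking apiserver status" entries above are a bounded poll: roughly every 500ms minikube looks for a kube-apiserver process, and after about ten seconds without one it gives up with "context deadline exceeded" and decides the cluster needs a reconfigure. A stripped-down sketch of that pattern; the timeout value is an assumption read off the timestamps, not a quoted constant:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverUp reports whether a kube-apiserver process matching minikube's
	// pattern is currently running, using the same pgrep invocation as the log.
	func apiserverUp(ctx context.Context) bool {
		cmd := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		return cmd.Run() == nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				fmt.Println("needs reconfigure:", ctx.Err()) // e.g. context deadline exceeded
				return
			case <-ticker.C:
				if apiserverUp(ctx) {
					fmt.Println("apiserver process found")
					return
				}
			}
		}
	}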
	I0229 18:58:07.665368   47919 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:07.665378   47919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:07.665433   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:07.713980   47919 cri.go:89] found id: ""
	I0229 18:58:07.714045   47919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:07.740723   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:07.753838   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:07.753914   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767175   47919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767197   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:07.902881   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.741237   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.970287   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.099101   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
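	(Annotation, not part of the original log.) Because the kubeconfig and manifest files were missing, the restart path re-runs the individual kubeadm init phases above in order (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init. A hypothetical local sketch of that sequence; minikube actually executes these over SSH on the node, and the PATH prefix matters because the pinned v1.16.0 binaries live under /var/lib/minikube/binaries:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			// Prefer the version-pinned binaries, mirroring the PATH override in the log.
			cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.16.0:"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Println("phase failed:", phase, err)
				os.Exit(1)
			}
		}
	}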
	I0229 18:58:09.214816   47919 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:09.214897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:09.715311   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.215640   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.715115   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.215866   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.715307   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.215171   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.715206   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.215153   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.715048   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.215148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.715628   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.215935   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.714969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.215921   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.715200   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.215151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.715520   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.215291   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.215157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.715037   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.215501   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.214953   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.715762   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.215608   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.715556   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.215633   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.715012   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.215182   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.215272   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.715667   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.215566   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.715860   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.214993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.715679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.215093   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.715081   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.215188   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.715981   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.215544   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.715080   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.215386   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.215078   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.715087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.215842   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.714950   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.215778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.715201   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.215815   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.715203   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.215521   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.715525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.215610   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.715474   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.215208   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.714993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.215128   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.215679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.715898   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.215271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.715702   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.214943   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.715085   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.215196   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.715164   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.215580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.715155   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.715879   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.215457   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.715056   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.215140   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.715448   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.715058   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.214969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.715535   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.715704   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.715897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.215106   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.715753   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.215737   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.715449   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.215634   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.715221   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.215582   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.715580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.215652   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.715281   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.215656   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.715759   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.714984   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.215747   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.214978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.715726   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.215092   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.715148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.215149   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.715717   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.215830   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.715275   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.215563   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.215014   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.715750   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.215911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.215895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.715565   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.214999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:09.215096   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:09.270645   47919 cri.go:89] found id: ""
	I0229 18:59:09.270672   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.270683   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:09.270690   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:09.270748   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:09.335492   47919 cri.go:89] found id: ""
	I0229 18:59:09.335519   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.335530   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:09.335546   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:09.335627   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:09.405117   47919 cri.go:89] found id: ""
	I0229 18:59:09.405150   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.405160   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:09.405167   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:09.405233   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:09.451096   47919 cri.go:89] found id: ""
	I0229 18:59:09.451128   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.451140   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:09.451147   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:09.451226   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:09.498951   47919 cri.go:89] found id: ""
	I0229 18:59:09.498981   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.499007   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:09.499014   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:09.499091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:09.544447   47919 cri.go:89] found id: ""
	I0229 18:59:09.544474   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.544484   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:09.544491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:09.544548   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:09.597735   47919 cri.go:89] found id: ""
	I0229 18:59:09.597764   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.597775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:09.597782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:09.597836   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:09.648458   47919 cri.go:89] found id: ""
	I0229 18:59:09.648480   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.648489   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:09.648499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:09.648515   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:09.700744   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:09.700792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:09.717303   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:09.717332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:09.845966   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:09.845984   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:09.845995   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:09.913069   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:09.913106   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
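	(Annotation, not part of the original log.) With no containers found for any control-plane component, the log-gathering pass above falls back to node-level sources: the kubelet and CRI-O journals, dmesg, kubectl describe nodes (which fails while the apiserver is down), and crictl. A compact, hypothetical sketch collecting the same bundle with the exact commands from the log; it assumes those tools exist on the machine it runs on:

	package main

	import (
		"fmt"
		"os/exec"
	)

	type source struct {
		name string
		cmd  string
	}

	func main() {
		// Each entry mirrors one "Gathering logs for ..." step in the log above.
		sources := []source{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("=== %s ===\n%s\n", s.name, out)
			if err != nil {
				// "describe nodes" is expected to fail while the apiserver is unreachable.
				fmt.Printf("(%s failed: %v)\n", s.name, err)
			}
		}
	}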
	I0229 18:59:12.465591   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:12.479774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:12.479825   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:12.517591   47919 cri.go:89] found id: ""
	I0229 18:59:12.517620   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.517630   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:12.517637   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:12.517693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:12.560735   47919 cri.go:89] found id: ""
	I0229 18:59:12.560758   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.560769   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:12.560776   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:12.560843   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:12.600002   47919 cri.go:89] found id: ""
	I0229 18:59:12.600025   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.600033   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:12.600043   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:12.600088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:12.639223   47919 cri.go:89] found id: ""
	I0229 18:59:12.639252   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.639264   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:12.639272   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:12.639339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:12.682491   47919 cri.go:89] found id: ""
	I0229 18:59:12.682514   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.682524   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:12.682531   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:12.682590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:12.720669   47919 cri.go:89] found id: ""
	I0229 18:59:12.720693   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.720700   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:12.720706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:12.720773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:12.764880   47919 cri.go:89] found id: ""
	I0229 18:59:12.764908   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.764919   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:12.764926   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:12.765011   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:12.808987   47919 cri.go:89] found id: ""
	I0229 18:59:12.809019   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.809052   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:12.809064   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:12.809079   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:12.866228   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:12.866263   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:12.886698   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:12.886729   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:12.963092   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:12.963116   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:12.963129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:13.034485   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:13.034524   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:15.588224   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.603293   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:15.603368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:15.648746   47919 cri.go:89] found id: ""
	I0229 18:59:15.648771   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.648781   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:15.648788   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:15.648850   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:15.686420   47919 cri.go:89] found id: ""
	I0229 18:59:15.686447   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.686463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:15.686470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:15.686533   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:15.729410   47919 cri.go:89] found id: ""
	I0229 18:59:15.729439   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.729451   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:15.729458   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:15.729526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:15.768078   47919 cri.go:89] found id: ""
	I0229 18:59:15.768108   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.768119   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:15.768127   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:15.768188   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:15.806725   47919 cri.go:89] found id: ""
	I0229 18:59:15.806753   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.806765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:15.806772   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:15.806845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:15.848566   47919 cri.go:89] found id: ""
	I0229 18:59:15.848593   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.848600   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:15.848606   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:15.848657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:15.888907   47919 cri.go:89] found id: ""
	I0229 18:59:15.888930   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.888942   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:15.888948   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:15.889009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:15.926653   47919 cri.go:89] found id: ""
	I0229 18:59:15.926686   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.926708   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:15.926729   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:15.926746   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:15.976773   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:15.976812   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:15.995440   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:15.995477   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:16.103753   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:16.103774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:16.103786   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:16.188282   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:16.188319   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:18.733451   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:18.748528   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:18.748607   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:18.785998   47919 cri.go:89] found id: ""
	I0229 18:59:18.786055   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.786069   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:18.786078   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:18.786144   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:18.824234   47919 cri.go:89] found id: ""
	I0229 18:59:18.824260   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.824270   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:18.824277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:18.824339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:18.868586   47919 cri.go:89] found id: ""
	I0229 18:59:18.868615   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.868626   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:18.868633   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:18.868696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:18.912622   47919 cri.go:89] found id: ""
	I0229 18:59:18.912647   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.912655   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:18.912661   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:18.912708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:18.952001   47919 cri.go:89] found id: ""
	I0229 18:59:18.952029   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.952040   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:18.952047   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:18.952108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:18.993085   47919 cri.go:89] found id: ""
	I0229 18:59:18.993130   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.993140   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:18.993148   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:18.993209   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:19.041498   47919 cri.go:89] found id: ""
	I0229 18:59:19.041524   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.041536   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:19.041543   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:19.041601   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:19.099107   47919 cri.go:89] found id: ""
	I0229 18:59:19.099132   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.099143   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:19.099153   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:19.099169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:19.158578   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:19.158615   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:19.173561   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:19.173590   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:19.248498   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:19.248524   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:19.248540   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:19.326904   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:19.326939   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:21.877087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:21.892919   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:21.892976   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:21.931119   47919 cri.go:89] found id: ""
	I0229 18:59:21.931147   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.931159   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:21.931167   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:21.931227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:21.971884   47919 cri.go:89] found id: ""
	I0229 18:59:21.971908   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.971916   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:21.971921   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:21.971975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:22.019170   47919 cri.go:89] found id: ""
	I0229 18:59:22.019206   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.019216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:22.019232   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:22.019311   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:22.078057   47919 cri.go:89] found id: ""
	I0229 18:59:22.078083   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.078093   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:22.078100   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:22.078162   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:22.128112   47919 cri.go:89] found id: ""
	I0229 18:59:22.128141   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.128151   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:22.128157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:22.128214   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:22.171354   47919 cri.go:89] found id: ""
	I0229 18:59:22.171382   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.171393   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:22.171400   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:22.171466   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:22.225620   47919 cri.go:89] found id: ""
	I0229 18:59:22.225642   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.225651   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:22.225658   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:22.225718   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:22.271291   47919 cri.go:89] found id: ""
	I0229 18:59:22.271320   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.271332   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:22.271343   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:22.271358   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:22.336735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:22.336765   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:22.354397   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:22.354425   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:22.432691   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:22.432713   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:22.432727   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:22.520239   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:22.520268   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.073478   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:25.105197   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:25.105262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:25.165700   47919 cri.go:89] found id: ""
	I0229 18:59:25.165728   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.165737   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:25.165744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:25.165810   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:25.210864   47919 cri.go:89] found id: ""
	I0229 18:59:25.210892   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.210904   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:25.210911   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:25.210974   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:25.257785   47919 cri.go:89] found id: ""
	I0229 18:59:25.257810   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.257820   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:25.257827   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:25.257888   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:25.299816   47919 cri.go:89] found id: ""
	I0229 18:59:25.299844   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.299855   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:25.299863   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:25.299933   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:25.339711   47919 cri.go:89] found id: ""
	I0229 18:59:25.339737   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.339746   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:25.339751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:25.339805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:25.381107   47919 cri.go:89] found id: ""
	I0229 18:59:25.381135   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.381145   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:25.381153   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:25.381211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:25.429029   47919 cri.go:89] found id: ""
	I0229 18:59:25.429054   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.429064   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:25.429071   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:25.429130   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:25.470598   47919 cri.go:89] found id: ""
	I0229 18:59:25.470629   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.470637   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:25.470644   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:25.470655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.516439   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:25.516476   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:25.569170   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:25.569204   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:25.584405   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:25.584430   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:25.663650   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:25.663671   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:25.663686   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.248036   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:28.263367   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:28.263440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:28.302232   47919 cri.go:89] found id: ""
	I0229 18:59:28.302259   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.302273   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:28.302281   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:28.302340   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:28.345147   47919 cri.go:89] found id: ""
	I0229 18:59:28.345173   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.345185   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:28.345192   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:28.345250   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:28.383671   47919 cri.go:89] found id: ""
	I0229 18:59:28.383690   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.383702   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:28.383709   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:28.383762   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:28.423737   47919 cri.go:89] found id: ""
	I0229 18:59:28.423762   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.423769   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:28.423774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:28.423826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:28.465679   47919 cri.go:89] found id: ""
	I0229 18:59:28.465705   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.465715   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:28.465723   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:28.465775   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:28.509703   47919 cri.go:89] found id: ""
	I0229 18:59:28.509731   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.509742   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:28.509754   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:28.509826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:28.549981   47919 cri.go:89] found id: ""
	I0229 18:59:28.550010   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.550021   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:28.550027   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:28.550093   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:28.589802   47919 cri.go:89] found id: ""
	I0229 18:59:28.589827   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.589834   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:28.589841   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:28.589853   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:28.670623   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:28.670644   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:28.670655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.765451   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:28.765484   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:28.821538   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:28.821571   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:28.889401   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:28.889438   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.406911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:31.422464   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:31.422541   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:31.460701   47919 cri.go:89] found id: ""
	I0229 18:59:31.460744   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.460755   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:31.460762   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:31.460822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:31.506966   47919 cri.go:89] found id: ""
	I0229 18:59:31.506996   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.507007   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:31.507013   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:31.507088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:31.542582   47919 cri.go:89] found id: ""
	I0229 18:59:31.542611   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.542623   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:31.542631   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:31.542693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:31.585470   47919 cri.go:89] found id: ""
	I0229 18:59:31.585496   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.585508   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:31.585516   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:31.585574   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:31.627751   47919 cri.go:89] found id: ""
	I0229 18:59:31.627785   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.627797   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:31.627805   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:31.627864   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:31.665988   47919 cri.go:89] found id: ""
	I0229 18:59:31.666009   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.666017   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:31.666023   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:31.666081   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:31.712553   47919 cri.go:89] found id: ""
	I0229 18:59:31.712583   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.712597   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:31.712603   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:31.712659   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.749904   47919 cri.go:89] found id: ""
	I0229 18:59:31.749944   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.749954   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:31.749965   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:31.749980   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:31.843949   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:31.843992   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:31.898158   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:31.898186   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:31.949798   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:31.949831   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.965666   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:31.965697   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:32.040368   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:34.541417   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:34.558286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:34.558345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:34.602083   47919 cri.go:89] found id: ""
	I0229 18:59:34.602113   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.602123   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:34.602130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:34.602200   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:34.647108   47919 cri.go:89] found id: ""
	I0229 18:59:34.647136   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.647146   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:34.647151   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:34.647220   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:34.692920   47919 cri.go:89] found id: ""
	I0229 18:59:34.692942   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.692950   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:34.692956   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:34.693000   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:34.739367   47919 cri.go:89] found id: ""
	I0229 18:59:34.739397   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.739408   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:34.739416   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:34.739478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:34.794083   47919 cri.go:89] found id: ""
	I0229 18:59:34.794106   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.794114   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:34.794120   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:34.794179   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:34.865371   47919 cri.go:89] found id: ""
	I0229 18:59:34.865400   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.865412   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:34.865419   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:34.865476   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:34.906957   47919 cri.go:89] found id: ""
	I0229 18:59:34.906986   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.906994   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:34.906999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:34.907063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:34.948548   47919 cri.go:89] found id: ""
	I0229 18:59:34.948570   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.948577   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:34.948586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:34.948598   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:35.036558   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:35.036594   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:35.080137   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:35.080169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:35.130408   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:35.130436   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:35.148306   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:35.148332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:35.222648   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:37.723158   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:37.741809   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:37.741885   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:37.787147   47919 cri.go:89] found id: ""
	I0229 18:59:37.787177   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.787184   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:37.787192   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:37.787249   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:37.835589   47919 cri.go:89] found id: ""
	I0229 18:59:37.835613   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.835623   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:37.835630   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:37.835687   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:37.895088   47919 cri.go:89] found id: ""
	I0229 18:59:37.895118   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.895130   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:37.895137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:37.895194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:37.940837   47919 cri.go:89] found id: ""
	I0229 18:59:37.940867   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.940878   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:37.940886   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:37.940946   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:37.989155   47919 cri.go:89] found id: ""
	I0229 18:59:37.989183   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.989194   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:37.989203   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:37.989267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:38.026517   47919 cri.go:89] found id: ""
	I0229 18:59:38.026543   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.026553   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:38.026560   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:38.026623   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:38.063299   47919 cri.go:89] found id: ""
	I0229 18:59:38.063328   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.063340   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:38.063347   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:38.063393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:38.106278   47919 cri.go:89] found id: ""
	I0229 18:59:38.106298   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.106305   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:38.106315   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:38.106330   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:38.182985   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:38.183008   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:38.183038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:38.260280   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:38.260312   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:38.303648   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:38.303678   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:38.352889   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:38.352931   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:40.870416   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:40.885618   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:40.885692   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:40.924088   47919 cri.go:89] found id: ""
	I0229 18:59:40.924115   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.924126   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:40.924133   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:40.924192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:40.959485   47919 cri.go:89] found id: ""
	I0229 18:59:40.959513   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.959524   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:40.959532   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:40.959593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:41.009453   47919 cri.go:89] found id: ""
	I0229 18:59:41.009478   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.009489   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:41.009496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:41.009552   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:41.052894   47919 cri.go:89] found id: ""
	I0229 18:59:41.052922   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.052933   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:41.052940   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:41.052997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:41.098299   47919 cri.go:89] found id: ""
	I0229 18:59:41.098328   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.098338   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:41.098345   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:41.098460   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:41.138287   47919 cri.go:89] found id: ""
	I0229 18:59:41.138313   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.138324   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:41.138333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:41.138395   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:41.176482   47919 cri.go:89] found id: ""
	I0229 18:59:41.176512   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.176522   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:41.176529   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:41.176598   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:41.215284   47919 cri.go:89] found id: ""
	I0229 18:59:41.215307   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.215317   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:41.215327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:41.215342   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:41.230954   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:41.230982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:41.313672   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:41.313696   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:41.313713   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:41.393574   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:41.393610   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:41.443384   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:41.443422   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:43.994323   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:44.008821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:44.008892   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:44.050088   47919 cri.go:89] found id: ""
	I0229 18:59:44.050116   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.050124   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:44.050130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:44.050207   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:44.089721   47919 cri.go:89] found id: ""
	I0229 18:59:44.089749   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.089756   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:44.089762   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:44.089818   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:44.132366   47919 cri.go:89] found id: ""
	I0229 18:59:44.132398   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.132407   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:44.132412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:44.132468   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:44.173568   47919 cri.go:89] found id: ""
	I0229 18:59:44.173591   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.173598   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:44.173604   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:44.173661   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:44.214660   47919 cri.go:89] found id: ""
	I0229 18:59:44.214683   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.214691   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:44.214696   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:44.214747   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:44.254355   47919 cri.go:89] found id: ""
	I0229 18:59:44.254386   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.254397   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:44.254405   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:44.254464   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:44.293548   47919 cri.go:89] found id: ""
	I0229 18:59:44.293573   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.293584   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:44.293591   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:44.293652   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:44.333335   47919 cri.go:89] found id: ""
	I0229 18:59:44.333361   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.333372   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:44.333383   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:44.333398   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:44.348941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:44.348973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:44.419949   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:44.419968   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:44.419982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:44.503445   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:44.503479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:44.558694   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:44.558728   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.129362   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:47.145410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:47.145483   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:47.194037   47919 cri.go:89] found id: ""
	I0229 18:59:47.194073   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.194092   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:47.194100   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:47.194160   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:47.232500   47919 cri.go:89] found id: ""
	I0229 18:59:47.232528   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.232559   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:47.232568   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:47.232634   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:47.271452   47919 cri.go:89] found id: ""
	I0229 18:59:47.271485   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.271494   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:47.271501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:47.271561   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:47.313482   47919 cri.go:89] found id: ""
	I0229 18:59:47.313509   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.313520   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:47.313527   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:47.313590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:47.354958   47919 cri.go:89] found id: ""
	I0229 18:59:47.354988   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.354996   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:47.355001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:47.355092   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:47.393312   47919 cri.go:89] found id: ""
	I0229 18:59:47.393338   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.393349   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:47.393356   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:47.393415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:47.431370   47919 cri.go:89] found id: ""
	I0229 18:59:47.431396   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.431406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:47.431413   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:47.431471   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:47.471659   47919 cri.go:89] found id: ""
	I0229 18:59:47.471683   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.471692   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:47.471702   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:47.471715   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.530365   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:47.530405   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:47.558874   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:47.558903   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:47.644009   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:47.644033   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:47.644047   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:47.730063   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:47.730095   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.272945   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:50.288718   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:50.288796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:50.331460   47919 cri.go:89] found id: ""
	I0229 18:59:50.331482   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.331489   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:50.331495   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:50.331543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:50.374960   47919 cri.go:89] found id: ""
	I0229 18:59:50.374989   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.375000   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:50.375006   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:50.375076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:50.415073   47919 cri.go:89] found id: ""
	I0229 18:59:50.415095   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.415102   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:50.415107   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:50.415157   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:50.452511   47919 cri.go:89] found id: ""
	I0229 18:59:50.452554   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.452563   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:50.452568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:50.452612   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:50.498103   47919 cri.go:89] found id: ""
	I0229 18:59:50.498125   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.498132   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:50.498137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:50.498193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:50.545366   47919 cri.go:89] found id: ""
	I0229 18:59:50.545397   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.545409   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:50.545417   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:50.545487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:50.608215   47919 cri.go:89] found id: ""
	I0229 18:59:50.608239   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.608250   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:50.608257   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:50.608314   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:50.660835   47919 cri.go:89] found id: ""
	I0229 18:59:50.660861   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.660881   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:50.660892   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:50.660907   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:50.749671   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:50.749712   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.797567   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:50.797595   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:50.848022   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:50.848059   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:50.862797   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:50.862820   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:50.934682   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:53.435804   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:53.451364   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:53.451440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:53.500680   47919 cri.go:89] found id: ""
	I0229 18:59:53.500706   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.500717   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:53.500744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:53.500797   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:53.565306   47919 cri.go:89] found id: ""
	I0229 18:59:53.565334   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.565344   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:53.565351   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:53.565410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:53.631438   47919 cri.go:89] found id: ""
	I0229 18:59:53.631461   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.631479   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:53.631486   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:53.631554   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:53.679482   47919 cri.go:89] found id: ""
	I0229 18:59:53.679506   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.679516   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:53.679524   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:53.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:53.722098   47919 cri.go:89] found id: ""
	I0229 18:59:53.722125   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.722135   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:53.722142   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:53.722211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:53.761804   47919 cri.go:89] found id: ""
	I0229 18:59:53.761838   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.761849   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:53.761858   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:53.761942   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:53.806109   47919 cri.go:89] found id: ""
	I0229 18:59:53.806137   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.806149   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:53.806157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:53.806219   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:53.856794   47919 cri.go:89] found id: ""
	I0229 18:59:53.856823   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.856831   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:53.856839   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:53.856849   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:53.908216   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:53.908252   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:53.923999   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:53.924038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:54.000750   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:54.000772   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:54.000783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:54.086840   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:54.086870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:56.630728   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:56.647368   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:56.647440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:56.693706   47919 cri.go:89] found id: ""
	I0229 18:59:56.693726   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.693733   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:56.693738   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:56.693780   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:56.733377   47919 cri.go:89] found id: ""
	I0229 18:59:56.733404   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.733415   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:56.733423   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:56.733491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:56.772186   47919 cri.go:89] found id: ""
	I0229 18:59:56.772209   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.772216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:56.772222   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:56.772267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:56.811919   47919 cri.go:89] found id: ""
	I0229 18:59:56.811964   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.811977   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:56.811984   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:56.812035   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.849345   47919 cri.go:89] found id: ""
	I0229 18:59:56.849372   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.849383   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:56.849390   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:56.849447   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:56.900091   47919 cri.go:89] found id: ""
	I0229 18:59:56.900119   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.900129   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:56.900136   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:56.900193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:56.937662   47919 cri.go:89] found id: ""
	I0229 18:59:56.937692   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.937703   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:56.937710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:56.937772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:56.978195   47919 cri.go:89] found id: ""
	I0229 18:59:56.978224   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.978234   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:56.978244   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:56.978259   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:57.059190   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:57.059223   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:57.101416   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:57.101442   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:57.156102   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:57.156140   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:57.171401   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:57.171435   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:57.243717   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:59.744588   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:59.760099   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:59.760175   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:59.798722   47919 cri.go:89] found id: ""
	I0229 18:59:59.798751   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.798762   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:59.798770   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:59.798830   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:59.842423   47919 cri.go:89] found id: ""
	I0229 18:59:59.842452   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.842463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:59.842470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:59.842532   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:59.883742   47919 cri.go:89] found id: ""
	I0229 18:59:59.883768   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.883775   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:59.883781   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:59.883826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:59.924062   47919 cri.go:89] found id: ""
	I0229 18:59:59.924091   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.924102   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:59.924109   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:59.924166   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:59.962465   47919 cri.go:89] found id: ""
	I0229 18:59:59.962497   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.962508   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:59.962515   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:59.962576   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:00.006069   47919 cri.go:89] found id: ""
	I0229 19:00:00.006103   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.006114   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:00.006123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:00.006185   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:00.047671   47919 cri.go:89] found id: ""
	I0229 19:00:00.047697   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.047709   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:00.047715   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:00.047773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:00.091452   47919 cri.go:89] found id: ""
	I0229 19:00:00.091475   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.091486   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:00.091497   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:00.091511   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:00.143282   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:00.143313   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:00.158342   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:00.158366   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:00.239745   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:00.239774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:00.239792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:00.339048   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:00.339083   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
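	(Editor's note: the cycle above repeats one probe per control-plane component, each via `sudo crictl ps -a --quiet --name=<component>`, and reports "No container was found matching ..." when the output is empty. The sketch below is illustrative only, not minikube source: it runs the same crictl invocations locally with os/exec, whereas the real code drives them over SSH inside the VM (ssh_runner.go / cri.go); the component list and the local runner are assumptions taken from the log.)

	// containerprobe.go - minimal local sketch of the probe pattern seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Components probed in each diagnostic cycle, as listed in the log.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// listContainerIDs returns the container IDs crictl reports for a name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("probe failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Matches the log's "No container was found matching <name>" warnings.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}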
	I0229 19:00:02.898414   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:02.914154   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:02.914221   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:02.956122   47919 cri.go:89] found id: ""
	I0229 19:00:02.956151   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.956211   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:02.956225   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:02.956272   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:02.993609   47919 cri.go:89] found id: ""
	I0229 19:00:02.993636   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.993646   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:02.993659   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:02.993720   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:03.038131   47919 cri.go:89] found id: ""
	I0229 19:00:03.038152   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.038160   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:03.038165   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:03.038217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:03.090845   47919 cri.go:89] found id: ""
	I0229 19:00:03.090866   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.090873   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:03.090878   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:03.090935   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:03.129520   47919 cri.go:89] found id: ""
	I0229 19:00:03.129549   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.129561   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:03.129568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:03.129620   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:03.178528   47919 cri.go:89] found id: ""
	I0229 19:00:03.178557   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.178567   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:03.178575   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:03.178631   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:03.218337   47919 cri.go:89] found id: ""
	I0229 19:00:03.218357   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.218364   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:03.218369   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:03.218417   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:03.267682   47919 cri.go:89] found id: ""
	I0229 19:00:03.267713   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.267726   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:03.267735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:03.267753   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:03.286961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:03.286987   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:03.376514   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:03.376535   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:03.376546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:03.459824   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:03.459872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:03.505821   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:03.505848   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:06.062525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:06.077637   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:06.077708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:06.119344   47919 cri.go:89] found id: ""
	I0229 19:00:06.119368   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.119376   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:06.119381   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:06.119430   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:06.158209   47919 cri.go:89] found id: ""
	I0229 19:00:06.158232   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.158239   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:06.158245   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:06.158291   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:06.198521   47919 cri.go:89] found id: ""
	I0229 19:00:06.198545   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.198553   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:06.198559   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:06.198609   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:06.235872   47919 cri.go:89] found id: ""
	I0229 19:00:06.235919   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.235930   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:06.235937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:06.235998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:06.282814   47919 cri.go:89] found id: ""
	I0229 19:00:06.282841   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.282853   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:06.282860   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:06.282928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:06.330549   47919 cri.go:89] found id: ""
	I0229 19:00:06.330572   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.330580   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:06.330585   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:06.330632   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:06.399968   47919 cri.go:89] found id: ""
	I0229 19:00:06.399996   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.400006   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:06.400012   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:06.400062   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:06.444899   47919 cri.go:89] found id: ""
	I0229 19:00:06.444921   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.444929   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:06.444937   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:06.444950   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:06.460552   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:06.460580   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:06.532932   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:06.532956   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:06.532969   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:06.615130   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:06.615170   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:06.664499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:06.664532   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.219226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:09.236769   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:09.236829   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:09.292309   47919 cri.go:89] found id: ""
	I0229 19:00:09.292331   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.292339   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:09.292345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:09.292392   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:09.355237   47919 cri.go:89] found id: ""
	I0229 19:00:09.355259   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.355267   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:09.355272   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:09.355319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:09.397950   47919 cri.go:89] found id: ""
	I0229 19:00:09.397977   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.397987   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:09.397995   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:09.398057   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:09.436751   47919 cri.go:89] found id: ""
	I0229 19:00:09.436779   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.436789   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:09.436797   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:09.436862   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:09.480288   47919 cri.go:89] found id: ""
	I0229 19:00:09.480311   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.480318   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:09.480324   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:09.480375   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:09.523576   47919 cri.go:89] found id: ""
	I0229 19:00:09.523599   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.523606   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:09.523611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:09.523658   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:09.562818   47919 cri.go:89] found id: ""
	I0229 19:00:09.562848   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.562859   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:09.562872   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:09.562919   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:09.603331   47919 cri.go:89] found id: ""
	I0229 19:00:09.603357   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.603369   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:09.603379   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:09.603393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.652060   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:09.652089   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:09.668372   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:09.668394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:09.745897   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:09.745923   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:09.745937   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:09.826981   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:09.827014   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.371447   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:12.385523   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:12.385613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:12.422038   47919 cri.go:89] found id: ""
	I0229 19:00:12.422067   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.422077   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:12.422084   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:12.422155   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:12.460443   47919 cri.go:89] found id: ""
	I0229 19:00:12.460470   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.460487   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:12.460495   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:12.460551   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:12.502791   47919 cri.go:89] found id: ""
	I0229 19:00:12.502820   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.502830   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:12.502838   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:12.502897   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:12.540738   47919 cri.go:89] found id: ""
	I0229 19:00:12.540769   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.540780   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:12.540786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:12.540845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:12.580041   47919 cri.go:89] found id: ""
	I0229 19:00:12.580072   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.580084   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:12.580091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:12.580151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:12.620721   47919 cri.go:89] found id: ""
	I0229 19:00:12.620750   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.620758   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:12.620763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:12.620820   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:12.659877   47919 cri.go:89] found id: ""
	I0229 19:00:12.659906   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.659917   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:12.659925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:12.659975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:12.699133   47919 cri.go:89] found id: ""
	I0229 19:00:12.699160   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.699170   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:12.699177   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:12.699188   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.742164   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:12.742189   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:12.792215   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:12.792248   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:12.808322   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:12.808344   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:12.879089   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:12.879114   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:12.879129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:15.466778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:15.480875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:15.480945   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:15.525331   47919 cri.go:89] found id: ""
	I0229 19:00:15.525353   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.525360   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:15.525366   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:15.525422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:15.567787   47919 cri.go:89] found id: ""
	I0229 19:00:15.567819   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.567831   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:15.567838   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:15.567923   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:15.609440   47919 cri.go:89] found id: ""
	I0229 19:00:15.609467   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.609477   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:15.609484   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:15.609559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:15.650113   47919 cri.go:89] found id: ""
	I0229 19:00:15.650142   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.650153   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:15.650161   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:15.650223   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:15.691499   47919 cri.go:89] found id: ""
	I0229 19:00:15.691527   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.691537   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:15.691544   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:15.691603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:15.731199   47919 cri.go:89] found id: ""
	I0229 19:00:15.731227   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.731239   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:15.731246   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:15.731324   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:15.772997   47919 cri.go:89] found id: ""
	I0229 19:00:15.773019   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.773027   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:15.773032   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:15.773091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:15.811223   47919 cri.go:89] found id: ""
	I0229 19:00:15.811244   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.811252   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:15.811271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:15.811283   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:15.862159   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:15.862196   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:15.877436   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:15.877460   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:15.948486   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:15.948513   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:15.948525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:16.030585   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:16.030617   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:18.592020   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:18.607286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:18.607368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:18.647886   47919 cri.go:89] found id: ""
	I0229 19:00:18.647913   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.647924   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:18.647951   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:18.648007   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:18.687394   47919 cri.go:89] found id: ""
	I0229 19:00:18.687420   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.687430   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:18.687436   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:18.687491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:18.734159   47919 cri.go:89] found id: ""
	I0229 19:00:18.734187   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.734198   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:18.734205   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:18.734262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:18.782950   47919 cri.go:89] found id: ""
	I0229 19:00:18.782989   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.783000   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:18.783008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:18.783089   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:18.818695   47919 cri.go:89] found id: ""
	I0229 19:00:18.818723   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.818734   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:18.818742   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:18.818805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:18.859479   47919 cri.go:89] found id: ""
	I0229 19:00:18.859504   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.859515   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:18.859522   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:18.859580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:18.902897   47919 cri.go:89] found id: ""
	I0229 19:00:18.902923   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.902934   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:18.902942   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:18.903002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:18.947708   47919 cri.go:89] found id: ""
	I0229 19:00:18.947731   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.947742   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:18.947752   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:18.947772   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:19.025069   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:19.025092   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:19.025107   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:19.115589   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:19.115626   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:19.164930   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:19.164960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:19.217497   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:19.217531   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
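	(Editor's note: every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port inside the VM. The snippet below is a hedged illustration, not part of the test: a bare TCP reachability check against localhost:8443 (host and port taken from the log) that distinguishes the "refused" case seen here from a listening apiserver.)

	// apiserverprobe.go - minimal reachability check for the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver container running, this branch is taken,
			// matching the repeated connection-refused errors in the log.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}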
	I0229 19:00:21.733516   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:21.748586   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:21.748648   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:21.788383   47919 cri.go:89] found id: ""
	I0229 19:00:21.788409   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.788420   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:21.788429   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:21.788487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:21.827147   47919 cri.go:89] found id: ""
	I0229 19:00:21.827176   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.827187   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:21.827194   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:21.827255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:21.867525   47919 cri.go:89] found id: ""
	I0229 19:00:21.867552   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.867561   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:21.867570   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:21.867618   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:21.911542   47919 cri.go:89] found id: ""
	I0229 19:00:21.911564   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.911573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:21.911578   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:21.911629   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:21.949779   47919 cri.go:89] found id: ""
	I0229 19:00:21.949803   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.949815   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:21.949821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:21.949877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:21.989663   47919 cri.go:89] found id: ""
	I0229 19:00:21.989692   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.989701   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:21.989706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:21.989750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:22.040777   47919 cri.go:89] found id: ""
	I0229 19:00:22.040803   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.040813   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:22.040820   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:22.040876   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:22.100661   47919 cri.go:89] found id: ""
	I0229 19:00:22.100682   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.100689   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:22.100697   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:22.100707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:22.165652   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:22.165682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:22.180278   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:22.180301   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:22.250220   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:22.250242   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:22.250254   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:22.339122   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:22.339160   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:24.894485   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:24.910480   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:24.910555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:24.949857   47919 cri.go:89] found id: ""
	I0229 19:00:24.949880   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.949891   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:24.949898   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:24.949968   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:24.993325   47919 cri.go:89] found id: ""
	I0229 19:00:24.993355   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.993366   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:24.993374   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:24.993431   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:25.053180   47919 cri.go:89] found id: ""
	I0229 19:00:25.053201   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.053208   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:25.053214   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:25.053269   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:25.105886   47919 cri.go:89] found id: ""
	I0229 19:00:25.105912   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.105919   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:25.105924   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:25.105969   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:25.161860   47919 cri.go:89] found id: ""
	I0229 19:00:25.161889   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.161907   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:25.161918   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:25.161982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:25.208566   47919 cri.go:89] found id: ""
	I0229 19:00:25.208591   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.208601   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:25.208625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:25.208690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:25.252151   47919 cri.go:89] found id: ""
	I0229 19:00:25.252173   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.252183   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:25.252190   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:25.252255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:25.293860   47919 cri.go:89] found id: ""
	I0229 19:00:25.293892   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.293903   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:25.293913   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:25.293926   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:25.343332   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:25.343367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:25.357855   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:25.357883   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:25.438031   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:25.438052   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:25.438064   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:25.523752   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:25.523789   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.078701   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:28.103422   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:28.103514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:28.149369   47919 cri.go:89] found id: ""
	I0229 19:00:28.149396   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.149407   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:28.149414   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:28.149481   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:28.191312   47919 cri.go:89] found id: ""
	I0229 19:00:28.191340   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.191350   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:28.191357   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:28.191422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:28.232257   47919 cri.go:89] found id: ""
	I0229 19:00:28.232283   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.232293   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:28.232301   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:28.232370   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:28.278477   47919 cri.go:89] found id: ""
	I0229 19:00:28.278502   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.278512   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:28.278520   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:28.278580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:28.319368   47919 cri.go:89] found id: ""
	I0229 19:00:28.319393   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.319401   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:28.319406   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:28.319451   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:28.363604   47919 cri.go:89] found id: ""
	I0229 19:00:28.363628   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.363636   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:28.363642   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:28.363688   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:28.403101   47919 cri.go:89] found id: ""
	I0229 19:00:28.403126   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.403137   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:28.403144   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:28.403203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:28.443915   47919 cri.go:89] found id: ""
	I0229 19:00:28.443939   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.443949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:28.443961   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:28.443974   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:28.459084   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:28.459112   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:28.531798   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:28.531827   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:28.531843   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:28.618141   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:28.618182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.664993   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:28.665024   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.218793   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:31.234816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:31.234890   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:31.273656   47919 cri.go:89] found id: ""
	I0229 19:00:31.273684   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.273692   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:31.273698   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:31.273744   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:31.316292   47919 cri.go:89] found id: ""
	I0229 19:00:31.316314   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.316322   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:31.316330   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:31.316391   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:31.356701   47919 cri.go:89] found id: ""
	I0229 19:00:31.356730   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.356742   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:31.356760   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:31.356813   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:31.395796   47919 cri.go:89] found id: ""
	I0229 19:00:31.395822   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.395830   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:31.395835   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:31.395884   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:31.436461   47919 cri.go:89] found id: ""
	I0229 19:00:31.436483   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.436491   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:31.436496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:31.436543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:31.482802   47919 cri.go:89] found id: ""
	I0229 19:00:31.482830   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.482840   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:31.482848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:31.482895   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:31.525897   47919 cri.go:89] found id: ""
	I0229 19:00:31.525930   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.525939   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:31.525949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:31.526009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:31.566323   47919 cri.go:89] found id: ""
	I0229 19:00:31.566350   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.566362   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:31.566372   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:31.566388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.618633   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:31.618674   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:31.634144   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:31.634166   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:31.712112   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:31.712136   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:31.712150   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:31.795159   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:31.795190   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
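	The repeated blocks above and below each trace one polling cycle: minikube pgreps for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. Below is a minimal Go sketch of that cycle for orientation only; it is not minikube's actual source, and it assumes a Linux host where sudo, crictl, and journalctl are available.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run executes a shell command the same way the log's ssh_runner lines do.
	func run(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for {
			// The cycle ends as soon as an apiserver process shows up.
			if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			// Otherwise list CRI containers per component, mirroring the
			// "No container was found matching ..." warnings in the log.
			for _, c := range components {
				if out, _ := run("sudo crictl ps -a --quiet --name=" + c); out == "" {
					fmt.Printf("no container found matching %q\n", c)
				}
			}
			// Gather the same diagnostics the log shows, then wait and retry.
			run("sudo journalctl -u kubelet -n 400")
			run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
			run("sudo journalctl -u crio -n 400")
			time.Sleep(3 * time.Second)
		}
	}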
	I0229 19:00:34.365419   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:34.380447   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:34.380521   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:34.422256   47919 cri.go:89] found id: ""
	I0229 19:00:34.422284   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.422295   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:34.422302   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:34.422359   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:34.466548   47919 cri.go:89] found id: ""
	I0229 19:00:34.466578   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.466588   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:34.466596   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:34.466654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:34.508359   47919 cri.go:89] found id: ""
	I0229 19:00:34.508395   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.508407   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:34.508414   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:34.508482   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:34.551284   47919 cri.go:89] found id: ""
	I0229 19:00:34.551308   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.551319   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:34.551325   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:34.551371   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:34.593360   47919 cri.go:89] found id: ""
	I0229 19:00:34.593385   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.593395   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:34.593403   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:34.593469   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:34.632097   47919 cri.go:89] found id: ""
	I0229 19:00:34.632117   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.632124   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:34.632135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:34.632180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:34.679495   47919 cri.go:89] found id: ""
	I0229 19:00:34.679521   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.679529   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:34.679534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:34.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:34.723322   47919 cri.go:89] found id: ""
	I0229 19:00:34.723351   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.723361   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:34.723371   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:34.723387   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:34.741497   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:34.741525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:34.833908   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:34.833932   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:34.833944   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:34.927172   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:34.927203   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:34.980487   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:34.980520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:37.535829   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:37.551274   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:37.551342   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:37.590225   47919 cri.go:89] found id: ""
	I0229 19:00:37.590263   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.590282   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:37.590289   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:37.590347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:37.630546   47919 cri.go:89] found id: ""
	I0229 19:00:37.630574   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.630585   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:37.630592   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:37.630651   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:37.676219   47919 cri.go:89] found id: ""
	I0229 19:00:37.676250   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.676261   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:37.676268   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:37.676329   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:37.713689   47919 cri.go:89] found id: ""
	I0229 19:00:37.713712   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.713721   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:37.713729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:37.713791   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:37.767999   47919 cri.go:89] found id: ""
	I0229 19:00:37.768034   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.768049   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:37.768057   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:37.768114   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:37.816836   47919 cri.go:89] found id: ""
	I0229 19:00:37.816865   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.816876   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:37.816884   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:37.816948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:37.876044   47919 cri.go:89] found id: ""
	I0229 19:00:37.876072   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.876084   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:37.876091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:37.876151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:37.926075   47919 cri.go:89] found id: ""
	I0229 19:00:37.926110   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.926122   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:37.926132   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:37.926147   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:38.004621   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:38.004648   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:38.004663   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:38.091456   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:38.091493   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:38.140118   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:38.140144   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:38.197206   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:38.197243   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:40.713817   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:40.731550   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:40.731613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:40.787760   47919 cri.go:89] found id: ""
	I0229 19:00:40.787788   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.787798   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:40.787806   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:40.787868   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:40.847842   47919 cri.go:89] found id: ""
	I0229 19:00:40.847870   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.847881   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:40.847888   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:40.847956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:40.888452   47919 cri.go:89] found id: ""
	I0229 19:00:40.888481   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.888493   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:40.888501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:40.888562   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:40.927727   47919 cri.go:89] found id: ""
	I0229 19:00:40.927749   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.927757   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:40.927762   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:40.927821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:40.967696   47919 cri.go:89] found id: ""
	I0229 19:00:40.967725   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.967737   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:40.967745   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:40.967804   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:41.008092   47919 cri.go:89] found id: ""
	I0229 19:00:41.008117   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.008127   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:41.008135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:41.008190   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:41.049235   47919 cri.go:89] found id: ""
	I0229 19:00:41.049265   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.049277   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:41.049285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:41.049393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:41.092962   47919 cri.go:89] found id: ""
	I0229 19:00:41.092988   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.092999   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:41.093018   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:41.093033   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:41.146322   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:41.146368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:41.161961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:41.161986   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:41.248674   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:41.248705   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:41.248732   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:41.333647   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:41.333689   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:43.882007   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:43.897786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:43.897860   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:43.943918   47919 cri.go:89] found id: ""
	I0229 19:00:43.943946   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.943955   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:43.943960   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:43.944010   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:43.988622   47919 cri.go:89] found id: ""
	I0229 19:00:43.988643   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.988650   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:43.988655   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:43.988699   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:44.036419   47919 cri.go:89] found id: ""
	I0229 19:00:44.036455   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.036466   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:44.036471   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:44.036530   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:44.078018   47919 cri.go:89] found id: ""
	I0229 19:00:44.078046   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.078056   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:44.078063   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:44.078119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:44.116142   47919 cri.go:89] found id: ""
	I0229 19:00:44.116168   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.116177   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:44.116183   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:44.116243   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:44.158804   47919 cri.go:89] found id: ""
	I0229 19:00:44.158826   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.158833   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:44.158839   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:44.158889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:44.204069   47919 cri.go:89] found id: ""
	I0229 19:00:44.204096   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.204106   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:44.204114   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:44.204173   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:44.247904   47919 cri.go:89] found id: ""
	I0229 19:00:44.247935   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.247949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:44.247959   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:44.247973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:44.338653   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:44.338690   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:44.384041   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:44.384069   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:44.439539   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:44.439575   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:44.455345   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:44.455372   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:44.538204   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
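	Every "describe nodes" attempt in these cycles fails with the same "connection refused" on localhost:8443 because no kube-apiserver container ever comes up, so nothing is listening on the apiserver port inside the VM. A small, hypothetical probe like the one below (not part of the test harness) would surface the same symptom directly.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Try to open the apiserver port that kubectl is being pointed at.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver running, this mirrors the log's
			// "connection to the server localhost:8443 was refused".
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443")
	}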
	I0229 19:00:47.038895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:47.054457   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:47.054539   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:47.099854   47919 cri.go:89] found id: ""
	I0229 19:00:47.099879   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.099890   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.099899   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:47.099956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:47.141354   47919 cri.go:89] found id: ""
	I0229 19:00:47.141381   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.141391   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.141398   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:47.141454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:47.181906   47919 cri.go:89] found id: ""
	I0229 19:00:47.181932   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.181942   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.181949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:47.182003   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:47.222505   47919 cri.go:89] found id: ""
	I0229 19:00:47.222530   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.222538   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:47.222548   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:47.222603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:47.265567   47919 cri.go:89] found id: ""
	I0229 19:00:47.265604   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.265616   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:47.265625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:47.265690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:47.304698   47919 cri.go:89] found id: ""
	I0229 19:00:47.304723   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.304730   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:47.304736   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:47.304781   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:47.344154   47919 cri.go:89] found id: ""
	I0229 19:00:47.344175   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.344182   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:47.344187   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:47.344230   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:47.383849   47919 cri.go:89] found id: ""
	I0229 19:00:47.383878   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.383889   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:47.383900   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:47.383915   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:47.458895   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:47.458914   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:47.458933   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:47.547776   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:47.547823   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:47.622606   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:47.622639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:47.685327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:47.685356   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.202151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:50.218008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:50.218063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:50.265322   47919 cri.go:89] found id: ""
	I0229 19:00:50.265345   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.265353   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:50.265358   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:50.265424   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:50.305646   47919 cri.go:89] found id: ""
	I0229 19:00:50.305669   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.305677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:50.305682   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:50.305732   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:50.342855   47919 cri.go:89] found id: ""
	I0229 19:00:50.342885   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.342894   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:50.342899   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:50.342948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:50.385365   47919 cri.go:89] found id: ""
	I0229 19:00:50.385396   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.385404   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:50.385410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:50.385456   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:50.425212   47919 cri.go:89] found id: ""
	I0229 19:00:50.425238   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.425256   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:50.425263   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:50.425321   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:50.465325   47919 cri.go:89] found id: ""
	I0229 19:00:50.465355   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.465366   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:50.465382   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:50.465455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:50.516256   47919 cri.go:89] found id: ""
	I0229 19:00:50.516282   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.516291   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:50.516297   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:50.516355   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:50.562233   47919 cri.go:89] found id: ""
	I0229 19:00:50.562262   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.562272   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:50.562280   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:50.562292   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:50.660311   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:50.660346   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:50.702790   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:50.702815   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:50.752085   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:50.752123   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.768346   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:50.768378   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:50.842567   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.343011   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:53.358002   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:53.358072   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:53.398397   47919 cri.go:89] found id: ""
	I0229 19:00:53.398424   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.398433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:53.398440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:53.398501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:53.437020   47919 cri.go:89] found id: ""
	I0229 19:00:53.437048   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.437059   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:53.437067   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:53.437116   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:53.473350   47919 cri.go:89] found id: ""
	I0229 19:00:53.473377   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.473388   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:53.473395   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:53.473454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:53.525678   47919 cri.go:89] found id: ""
	I0229 19:00:53.525701   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.525708   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:53.525716   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:53.525772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:53.595411   47919 cri.go:89] found id: ""
	I0229 19:00:53.595437   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.595448   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:53.595456   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:53.595518   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:53.635890   47919 cri.go:89] found id: ""
	I0229 19:00:53.635916   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.635923   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:53.635929   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:53.635992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:53.674966   47919 cri.go:89] found id: ""
	I0229 19:00:53.674992   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.675000   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:53.675005   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:53.675076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:53.713839   47919 cri.go:89] found id: ""
	I0229 19:00:53.713860   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.713868   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:53.713882   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:53.713896   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:53.765185   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:53.765219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:53.780830   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:53.780855   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:53.858528   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.858552   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:53.858567   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:53.936002   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:53.936034   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:56.481406   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:56.498980   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:56.499059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:56.557482   47919 cri.go:89] found id: ""
	I0229 19:00:56.557509   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.557520   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:56.557528   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:56.557587   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:56.625912   47919 cri.go:89] found id: ""
	I0229 19:00:56.625941   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.625952   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:56.625964   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:56.626023   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:56.663104   47919 cri.go:89] found id: ""
	I0229 19:00:56.663193   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.663210   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:56.663217   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:56.663265   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:56.707473   47919 cri.go:89] found id: ""
	I0229 19:00:56.707494   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.707502   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:56.707507   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:56.707564   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:56.752569   47919 cri.go:89] found id: ""
	I0229 19:00:56.752593   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.752604   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:56.752611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:56.752673   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:56.793618   47919 cri.go:89] found id: ""
	I0229 19:00:56.793660   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.793672   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:56.793680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:56.793741   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:56.833215   47919 cri.go:89] found id: ""
	I0229 19:00:56.833241   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.833252   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:56.833259   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:56.833319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.873162   47919 cri.go:89] found id: ""
	I0229 19:00:56.873187   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.873195   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:56.873203   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:56.873219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:56.887683   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:56.887707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:56.957351   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:56.957369   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:56.957380   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:57.042415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:57.042449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:57.087636   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:57.087660   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:59.637662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:59.652747   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:59.652815   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:59.692780   47919 cri.go:89] found id: ""
	I0229 19:00:59.692801   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.692809   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:59.692814   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:59.692891   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:59.733445   47919 cri.go:89] found id: ""
	I0229 19:00:59.733474   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.733482   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:59.733488   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:59.733535   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:59.769723   47919 cri.go:89] found id: ""
	I0229 19:00:59.769754   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.769764   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:59.769770   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:59.769828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:59.807810   47919 cri.go:89] found id: ""
	I0229 19:00:59.807837   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.807848   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:59.807855   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:59.807916   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:59.849623   47919 cri.go:89] found id: ""
	I0229 19:00:59.849649   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.849659   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:59.849666   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:59.849730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:59.895593   47919 cri.go:89] found id: ""
	I0229 19:00:59.895620   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.895631   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:59.895638   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:59.895698   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:59.935693   47919 cri.go:89] found id: ""
	I0229 19:00:59.935716   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.935724   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:59.935729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:59.935786   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:59.977655   47919 cri.go:89] found id: ""
	I0229 19:00:59.977685   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.977693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:59.977710   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:59.977725   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:59.992518   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:59.992545   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:00.075660   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:00.075679   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:00.075691   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:00.162338   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:00.162384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:00.207000   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:00.207049   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:02.759942   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:02.776225   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:02.776293   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:02.812511   47919 cri.go:89] found id: ""
	I0229 19:01:02.812538   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.812549   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:02.812556   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:02.812614   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:02.851417   47919 cri.go:89] found id: ""
	I0229 19:01:02.851448   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.851467   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:02.851483   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:02.851560   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:02.894440   47919 cri.go:89] found id: ""
	I0229 19:01:02.894465   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.894475   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:02.894487   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:02.894542   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:02.931046   47919 cri.go:89] found id: ""
	I0229 19:01:02.931075   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.931084   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:02.931092   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:02.931150   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:02.971204   47919 cri.go:89] found id: ""
	I0229 19:01:02.971226   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.971233   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:02.971238   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:02.971307   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:03.011695   47919 cri.go:89] found id: ""
	I0229 19:01:03.011723   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.011734   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:03.011741   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:03.011796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:03.054738   47919 cri.go:89] found id: ""
	I0229 19:01:03.054763   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.054775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:03.054782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:03.054857   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:03.099242   47919 cri.go:89] found id: ""
	I0229 19:01:03.099267   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.099278   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:03.099289   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:03.099303   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:03.148748   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:03.148778   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:03.164550   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:03.164578   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:03.241564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:03.241586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:03.241601   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:03.329350   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:03.329384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:05.884415   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:05.901979   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:05.902044   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:05.946382   47919 cri.go:89] found id: ""
	I0229 19:01:05.946407   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.946415   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:05.946421   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:05.946488   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:05.991783   47919 cri.go:89] found id: ""
	I0229 19:01:05.991807   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.991816   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:05.991822   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:05.991879   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:06.034390   47919 cri.go:89] found id: ""
	I0229 19:01:06.034417   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.034426   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:06.034431   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:06.034475   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:06.078417   47919 cri.go:89] found id: ""
	I0229 19:01:06.078445   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.078456   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:06.078463   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:06.078527   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:06.119892   47919 cri.go:89] found id: ""
	I0229 19:01:06.119927   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.119938   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:06.119952   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:06.120008   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:06.159308   47919 cri.go:89] found id: ""
	I0229 19:01:06.159332   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.159339   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:06.159346   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:06.159410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:06.208715   47919 cri.go:89] found id: ""
	I0229 19:01:06.208742   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.208751   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:06.208756   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:06.208812   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:06.253831   47919 cri.go:89] found id: ""
	I0229 19:01:06.253858   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.253866   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:06.253881   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:06.253895   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:06.315105   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:06.315141   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:06.349340   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:06.349386   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:06.431456   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:06.431477   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:06.431492   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:06.517754   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:06.517783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.064267   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:09.078751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:09.078822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:09.130371   47919 cri.go:89] found id: ""
	I0229 19:01:09.130396   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.130404   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:09.130410   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:09.130461   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:09.166312   47919 cri.go:89] found id: ""
	I0229 19:01:09.166340   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.166351   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:09.166359   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:09.166415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:09.202957   47919 cri.go:89] found id: ""
	I0229 19:01:09.202978   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.202985   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:09.202991   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:09.203050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:09.242350   47919 cri.go:89] found id: ""
	I0229 19:01:09.242380   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.242391   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:09.242399   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:09.242455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:09.300471   47919 cri.go:89] found id: ""
	I0229 19:01:09.300492   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.300500   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:09.300505   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:09.300568   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:09.356861   47919 cri.go:89] found id: ""
	I0229 19:01:09.356886   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.356893   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:09.356898   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:09.356965   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:09.411042   47919 cri.go:89] found id: ""
	I0229 19:01:09.411067   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.411075   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:09.411080   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:09.411136   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:09.446312   47919 cri.go:89] found id: ""
	I0229 19:01:09.446336   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.446347   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:09.446356   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:09.446367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.492195   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:09.492227   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:09.541943   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:09.541973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:09.557347   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:09.557373   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:09.635319   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:09.635363   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:09.635379   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.224271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:12.243330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:12.243403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:12.285525   47919 cri.go:89] found id: ""
	I0229 19:01:12.285547   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.285556   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:12.285561   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:12.285617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:12.347511   47919 cri.go:89] found id: ""
	I0229 19:01:12.347535   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.347543   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:12.347548   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:12.347593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:12.392145   47919 cri.go:89] found id: ""
	I0229 19:01:12.392207   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.392231   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:12.392248   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:12.392366   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:12.430238   47919 cri.go:89] found id: ""
	I0229 19:01:12.430268   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.430278   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:12.430286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:12.430345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:12.473019   47919 cri.go:89] found id: ""
	I0229 19:01:12.473054   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.473065   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:12.473072   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:12.473131   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:12.510653   47919 cri.go:89] found id: ""
	I0229 19:01:12.510681   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.510692   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:12.510699   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:12.510759   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:12.548137   47919 cri.go:89] found id: ""
	I0229 19:01:12.548163   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.548171   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:12.548176   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:12.548232   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:12.588416   47919 cri.go:89] found id: ""
	I0229 19:01:12.588435   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.588443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:12.588452   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:12.588467   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:12.603651   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:12.603681   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:12.681060   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:12.681081   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:12.681094   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.764839   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:12.764870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:12.807178   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:12.807202   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.357205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:15.382491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:15.382571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:15.422538   47919 cri.go:89] found id: ""
	I0229 19:01:15.422561   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.422568   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:15.422577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:15.422635   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:15.464564   47919 cri.go:89] found id: ""
	I0229 19:01:15.464593   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.464601   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:15.464607   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:15.464662   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:15.502625   47919 cri.go:89] found id: ""
	I0229 19:01:15.502650   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.502662   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:15.502669   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:15.502724   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:15.543187   47919 cri.go:89] found id: ""
	I0229 19:01:15.543215   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.543229   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:15.543234   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:15.543283   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:15.585273   47919 cri.go:89] found id: ""
	I0229 19:01:15.585296   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.585306   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:15.585314   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:15.585386   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:15.626180   47919 cri.go:89] found id: ""
	I0229 19:01:15.626208   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.626219   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:15.626227   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:15.626288   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:15.670572   47919 cri.go:89] found id: ""
	I0229 19:01:15.670596   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.670604   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:15.670610   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:15.670657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:15.710549   47919 cri.go:89] found id: ""
	I0229 19:01:15.710587   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.710595   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:15.710604   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:15.710618   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.765148   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:15.765180   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:15.780717   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:15.780742   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:15.852811   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:15.852835   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:15.852856   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:15.930728   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:15.930759   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.483798   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:18.497545   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:18.497611   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:18.540226   47919 cri.go:89] found id: ""
	I0229 19:01:18.540256   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.540266   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:18.540274   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:18.540336   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:18.578106   47919 cri.go:89] found id: ""
	I0229 19:01:18.578124   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.578134   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:18.578142   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:18.578192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:18.617138   47919 cri.go:89] found id: ""
	I0229 19:01:18.617167   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.617178   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:18.617185   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:18.617242   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:18.654667   47919 cri.go:89] found id: ""
	I0229 19:01:18.654762   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.654779   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:18.654787   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:18.654845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:18.695837   47919 cri.go:89] found id: ""
	I0229 19:01:18.695859   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.695866   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:18.695875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:18.695929   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:18.738178   47919 cri.go:89] found id: ""
	I0229 19:01:18.738199   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.738206   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:18.738211   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:18.738259   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:18.777018   47919 cri.go:89] found id: ""
	I0229 19:01:18.777044   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.777052   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:18.777058   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:18.777102   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:18.820701   47919 cri.go:89] found id: ""
	I0229 19:01:18.820723   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.820734   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:18.820746   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:18.820762   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:18.907150   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:18.907182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.950363   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:18.950393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:18.999446   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:18.999479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:19.020681   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:19.020714   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:19.139305   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:21.640062   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:21.654739   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:21.654799   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:21.701885   47919 cri.go:89] found id: ""
	I0229 19:01:21.701912   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.701921   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:21.701929   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:21.701987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:21.746736   47919 cri.go:89] found id: ""
	I0229 19:01:21.746767   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.746780   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:21.746787   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:21.746847   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:21.784830   47919 cri.go:89] found id: ""
	I0229 19:01:21.784851   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.784859   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:21.784865   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:21.784911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.824122   47919 cri.go:89] found id: ""
	I0229 19:01:21.824151   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.824162   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:21.824171   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:21.824217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:21.869937   47919 cri.go:89] found id: ""
	I0229 19:01:21.869967   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.869979   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:21.869986   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:21.870043   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:21.909902   47919 cri.go:89] found id: ""
	I0229 19:01:21.909928   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.909939   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:21.909946   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:21.910005   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:21.953980   47919 cri.go:89] found id: ""
	I0229 19:01:21.954021   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.954033   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:21.954040   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:21.954108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:21.997483   47919 cri.go:89] found id: ""
	I0229 19:01:21.997510   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.997521   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:21.997531   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:21.997546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:22.108610   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:22.108639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:22.153571   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:22.153596   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:22.204525   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:22.204555   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:22.219217   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:22.219241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:22.294794   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:24.795157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:24.811292   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:24.811363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:24.854354   47919 cri.go:89] found id: ""
	I0229 19:01:24.854387   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.854396   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:24.854402   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:24.854455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:24.890800   47919 cri.go:89] found id: ""
	I0229 19:01:24.890828   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.890838   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:24.890844   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:24.890900   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:24.930961   47919 cri.go:89] found id: ""
	I0229 19:01:24.930983   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.930991   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:24.931001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:24.931073   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:24.968719   47919 cri.go:89] found id: ""
	I0229 19:01:24.968740   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.968747   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:24.968752   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:24.968809   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:25.012723   47919 cri.go:89] found id: ""
	I0229 19:01:25.012746   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.012756   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:25.012763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:25.012821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:25.064388   47919 cri.go:89] found id: ""
	I0229 19:01:25.064412   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.064422   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:25.064435   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:25.064496   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:25.122256   47919 cri.go:89] found id: ""
	I0229 19:01:25.122277   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.122286   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:25.122291   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:25.122335   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:25.165487   47919 cri.go:89] found id: ""
	I0229 19:01:25.165515   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.165526   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:25.165536   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:25.165557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:25.249294   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:25.249333   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:25.297013   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:25.297048   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:25.346276   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:25.346309   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:25.362604   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:25.362635   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:25.434586   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:27.935727   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:27.950680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:27.950750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:27.989253   47919 cri.go:89] found id: ""
	I0229 19:01:27.989282   47919 logs.go:276] 0 containers: []
	W0229 19:01:27.989293   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:27.989300   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:27.989357   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:28.039714   47919 cri.go:89] found id: ""
	I0229 19:01:28.039741   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.039750   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:28.039763   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:28.039828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:28.102860   47919 cri.go:89] found id: ""
	I0229 19:01:28.102886   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.102897   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:28.102904   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:28.102971   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:28.160075   47919 cri.go:89] found id: ""
	I0229 19:01:28.160097   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.160104   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:28.160110   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:28.160180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:28.200297   47919 cri.go:89] found id: ""
	I0229 19:01:28.200317   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.200325   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:28.200330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:28.200393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:28.239912   47919 cri.go:89] found id: ""
	I0229 19:01:28.239944   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.239955   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:28.239963   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:28.240018   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:28.278525   47919 cri.go:89] found id: ""
	I0229 19:01:28.278550   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.278558   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:28.278564   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:28.278617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:28.315659   47919 cri.go:89] found id: ""
	I0229 19:01:28.315685   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.315693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:28.315703   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:28.315716   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:28.330102   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:28.330127   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:28.402474   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:28.402497   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:28.402513   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:28.486271   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:28.486308   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:28.531888   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:28.531918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.082385   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:31.122771   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:31.122844   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:31.165097   47919 cri.go:89] found id: ""
	I0229 19:01:31.165127   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.165138   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:31.165148   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:31.165215   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:31.209449   47919 cri.go:89] found id: ""
	I0229 19:01:31.209482   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.209492   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:31.209498   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:31.209559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:31.249660   47919 cri.go:89] found id: ""
	I0229 19:01:31.249687   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.249698   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:31.249705   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:31.249770   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:31.299268   47919 cri.go:89] found id: ""
	I0229 19:01:31.299292   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.299301   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:31.299308   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:31.299363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:31.339078   47919 cri.go:89] found id: ""
	I0229 19:01:31.339111   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.339123   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:31.339131   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:31.339194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:31.378548   47919 cri.go:89] found id: ""
	I0229 19:01:31.378576   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.378587   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:31.378595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:31.378654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:31.418744   47919 cri.go:89] found id: ""
	I0229 19:01:31.418780   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.418812   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:31.418824   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:31.418889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:31.464078   47919 cri.go:89] found id: ""
	I0229 19:01:31.464103   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.464113   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:31.464124   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:31.464138   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.516406   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:31.516434   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:31.531504   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:31.531527   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:31.607391   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:31.607413   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:31.607426   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:31.691582   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:31.691609   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.233205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:34.250283   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:34.250345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:34.294588   47919 cri.go:89] found id: ""
	I0229 19:01:34.294620   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.294631   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:34.294639   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:34.294712   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:34.337033   47919 cri.go:89] found id: ""
	I0229 19:01:34.337061   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.337071   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:34.337079   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:34.337141   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:34.382800   47919 cri.go:89] found id: ""
	I0229 19:01:34.382831   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.382840   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:34.382845   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:34.382904   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:34.422931   47919 cri.go:89] found id: ""
	I0229 19:01:34.422959   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.422970   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:34.422977   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:34.423059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:34.469724   47919 cri.go:89] found id: ""
	I0229 19:01:34.469755   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.469765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:34.469773   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:34.469824   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:34.513428   47919 cri.go:89] found id: ""
	I0229 19:01:34.513461   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.513472   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:34.513479   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:34.513555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:34.552593   47919 cri.go:89] found id: ""
	I0229 19:01:34.552638   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.552648   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:34.552655   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:34.552717   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:34.596516   47919 cri.go:89] found id: ""
	I0229 19:01:34.596538   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.596546   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:34.596554   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:34.596568   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:34.611782   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:34.611805   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:34.694333   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:34.694352   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:34.694368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:34.781638   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:34.781669   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.832910   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:34.832943   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:37.398458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:37.415617   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:37.415696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:37.455390   47919 cri.go:89] found id: ""
	I0229 19:01:37.455421   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.455433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:37.455440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:37.455501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:37.498869   47919 cri.go:89] found id: ""
	I0229 19:01:37.498890   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.498901   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:37.498909   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:37.498972   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:37.538928   47919 cri.go:89] found id: ""
	I0229 19:01:37.538952   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.538960   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:37.538966   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:37.539012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:37.577278   47919 cri.go:89] found id: ""
	I0229 19:01:37.577299   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.577310   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:37.577317   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:37.577372   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:37.620313   47919 cri.go:89] found id: ""
	I0229 19:01:37.620342   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.620352   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:37.620359   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:37.620420   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:37.657696   47919 cri.go:89] found id: ""
	I0229 19:01:37.657717   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.657726   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:37.657734   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:37.657792   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:37.698814   47919 cri.go:89] found id: ""
	I0229 19:01:37.698833   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.698841   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:37.698848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:37.698902   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:37.736438   47919 cri.go:89] found id: ""
	I0229 19:01:37.736469   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.736480   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:37.736490   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:37.736506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:37.753849   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:37.753871   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:37.854740   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:37.854764   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:37.854783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:37.943837   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:37.943872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:37.988180   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:37.988209   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:40.543133   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:40.558453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:40.558526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:40.599794   47919 cri.go:89] found id: ""
	I0229 19:01:40.599814   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.599821   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:40.599827   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:40.599874   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:40.641738   47919 cri.go:89] found id: ""
	I0229 19:01:40.641762   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.641769   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:40.641775   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:40.641819   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:40.683905   47919 cri.go:89] found id: ""
	I0229 19:01:40.683935   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.683945   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:40.683953   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:40.684006   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:40.727645   47919 cri.go:89] found id: ""
	I0229 19:01:40.727675   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.727685   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:40.727693   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:40.727754   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:40.785142   47919 cri.go:89] found id: ""
	I0229 19:01:40.785172   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.785192   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:40.785199   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:40.785252   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:40.854534   47919 cri.go:89] found id: ""
	I0229 19:01:40.854560   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.854571   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:40.854580   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:40.854639   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:40.900823   47919 cri.go:89] found id: ""
	I0229 19:01:40.900851   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.900862   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:40.900869   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:40.900928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:40.938108   47919 cri.go:89] found id: ""
	I0229 19:01:40.938135   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.938146   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:40.938156   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:40.938171   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:40.987452   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:40.987482   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:41.037388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:41.037417   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:41.051987   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:41.052015   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:41.126077   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:41.126102   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:41.126116   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
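Each of the cycles above and below is the same health poll: minikube first checks for a running kube-apiserver process, then asks the CRI-O runtime (via crictl) for each expected control-plane container, and, finding none, gathers kubelet, dmesg, "describe nodes" and CRI-O logs before retrying. A minimal sketch of running the same poll by hand inside the VM (for example over minikube ssh), using only the commands already quoted in this log:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running at all?
	sudo crictl ps -a --quiet --name=kube-apiserver     # has CRI-O ever created an apiserver container?
	sudo journalctl -u kubelet -n 400                   # kubelet's view of why static pods are not starting
	sudo journalctl -u crio -n 400                      # runtime-side view of the same time window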
	I0229 19:01:43.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:43.730683   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:43.730755   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:43.790637   47919 cri.go:89] found id: ""
	I0229 19:01:43.790665   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.790676   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:43.790682   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:43.790731   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:43.848237   47919 cri.go:89] found id: ""
	I0229 19:01:43.848263   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.848272   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:43.848277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:43.848337   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:43.897892   47919 cri.go:89] found id: ""
	I0229 19:01:43.897920   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.897928   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:43.897934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:43.897989   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:43.936068   47919 cri.go:89] found id: ""
	I0229 19:01:43.936089   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.936097   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:43.936102   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:43.936149   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:43.978636   47919 cri.go:89] found id: ""
	I0229 19:01:43.978670   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.978682   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:43.978689   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:43.978751   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:44.018642   47919 cri.go:89] found id: ""
	I0229 19:01:44.018676   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.018684   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:44.018690   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:44.018737   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:44.056237   47919 cri.go:89] found id: ""
	I0229 19:01:44.056267   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.056278   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:44.056285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:44.056347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:44.095489   47919 cri.go:89] found id: ""
	I0229 19:01:44.095522   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.095532   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:44.095543   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:44.095557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:44.139407   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:44.139433   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:44.189893   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:44.189921   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:44.206426   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:44.206449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:44.285594   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:44.285621   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:44.285638   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:46.869271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:46.885267   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:46.885356   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:46.921696   47919 cri.go:89] found id: ""
	I0229 19:01:46.921718   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.921725   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:46.921731   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:46.921789   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:46.960265   47919 cri.go:89] found id: ""
	I0229 19:01:46.960291   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.960302   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:46.960309   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:46.960367   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:46.998035   47919 cri.go:89] found id: ""
	I0229 19:01:46.998062   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.998070   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:46.998075   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:46.998119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:47.041563   47919 cri.go:89] found id: ""
	I0229 19:01:47.041586   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.041595   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:47.041600   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:47.041643   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:47.084146   47919 cri.go:89] found id: ""
	I0229 19:01:47.084167   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.084174   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:47.084179   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:47.084227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:47.126813   47919 cri.go:89] found id: ""
	I0229 19:01:47.126835   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.126845   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:47.126853   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:47.126909   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:47.165379   47919 cri.go:89] found id: ""
	I0229 19:01:47.165399   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.165406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:47.165412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:47.165454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:47.204263   47919 cri.go:89] found id: ""
	I0229 19:01:47.204306   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.204316   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:47.204328   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:47.204345   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:47.248848   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:47.248876   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:47.299388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:47.299416   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:47.314484   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:47.314507   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:47.386231   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:47.386256   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:47.386272   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:49.965988   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:49.980621   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:49.980700   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:50.025010   47919 cri.go:89] found id: ""
	I0229 19:01:50.025030   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.025037   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:50.025042   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:50.025090   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:50.066947   47919 cri.go:89] found id: ""
	I0229 19:01:50.066976   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.066984   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:50.066990   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:50.067061   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:50.108892   47919 cri.go:89] found id: ""
	I0229 19:01:50.108913   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.108931   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:50.108937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:50.108997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:50.149601   47919 cri.go:89] found id: ""
	I0229 19:01:50.149626   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.149636   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:50.149643   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:50.149704   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:50.191881   47919 cri.go:89] found id: ""
	I0229 19:01:50.191908   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.191918   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:50.191925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:50.191987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:50.233782   47919 cri.go:89] found id: ""
	I0229 19:01:50.233803   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.233811   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:50.233816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:50.233870   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:50.274913   47919 cri.go:89] found id: ""
	I0229 19:01:50.274941   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.274950   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:50.274955   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:50.275050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:50.321924   47919 cri.go:89] found id: ""
	I0229 19:01:50.321945   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.321953   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:50.321967   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:50.321978   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:50.367357   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:50.367388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:50.417229   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:50.417260   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:50.432031   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:50.432056   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:50.504920   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.504942   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:50.504960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.110884   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:53.126947   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:53.127004   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:53.166940   47919 cri.go:89] found id: ""
	I0229 19:01:53.166965   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.166975   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:53.166982   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:53.167054   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:53.205917   47919 cri.go:89] found id: ""
	I0229 19:01:53.205960   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.205968   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:53.205974   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:53.206030   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:53.245547   47919 cri.go:89] found id: ""
	I0229 19:01:53.245577   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.245587   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:53.245595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:53.245654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:53.287513   47919 cri.go:89] found id: ""
	I0229 19:01:53.287540   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.287550   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:53.287557   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:53.287617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:53.329269   47919 cri.go:89] found id: ""
	I0229 19:01:53.329299   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.329310   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:53.329318   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:53.329379   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:53.377438   47919 cri.go:89] found id: ""
	I0229 19:01:53.377467   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.377478   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:53.377485   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:53.377549   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:53.418414   47919 cri.go:89] found id: ""
	I0229 19:01:53.418440   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.418448   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:53.418453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:53.418514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:53.458365   47919 cri.go:89] found id: ""
	I0229 19:01:53.458393   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.458402   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:53.458409   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:53.458421   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.540710   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:53.540744   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:53.637271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:53.637302   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:53.687822   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:53.687850   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:53.703482   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:53.703506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:53.779564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.280300   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:56.295210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:56.295295   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:56.336903   47919 cri.go:89] found id: ""
	I0229 19:01:56.336935   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.336945   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:56.336953   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:56.337002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:56.373300   47919 cri.go:89] found id: ""
	I0229 19:01:56.373322   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.373330   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:56.373338   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:56.373390   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:56.411949   47919 cri.go:89] found id: ""
	I0229 19:01:56.411975   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.411984   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:56.411990   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:56.412047   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:56.453302   47919 cri.go:89] found id: ""
	I0229 19:01:56.453329   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.453339   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:56.453344   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:56.453403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:56.490543   47919 cri.go:89] found id: ""
	I0229 19:01:56.490565   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.490576   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:56.490582   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:56.490637   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:56.547078   47919 cri.go:89] found id: ""
	I0229 19:01:56.547101   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.547108   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:56.547113   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:56.547171   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:56.598382   47919 cri.go:89] found id: ""
	I0229 19:01:56.598408   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.598417   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:56.598424   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:56.598478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:56.646090   47919 cri.go:89] found id: ""
	I0229 19:01:56.646117   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.646125   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:56.646134   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:56.646145   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:56.691685   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:56.691711   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:56.742886   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:56.742927   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:56.758326   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:56.758350   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:56.830140   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.830160   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:56.830177   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.414437   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:59.429710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:59.429793   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:59.473993   47919 cri.go:89] found id: ""
	I0229 19:01:59.474018   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.474025   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:59.474031   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:59.474091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:59.529114   47919 cri.go:89] found id: ""
	I0229 19:01:59.529143   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.529157   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:59.529164   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:59.529222   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:59.596624   47919 cri.go:89] found id: ""
	I0229 19:01:59.596654   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.596665   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:59.596672   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:59.596730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:59.641088   47919 cri.go:89] found id: ""
	I0229 19:01:59.641118   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.641130   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:59.641138   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:59.641198   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:59.682294   47919 cri.go:89] found id: ""
	I0229 19:01:59.682318   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.682327   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:59.682333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:59.682406   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:59.722881   47919 cri.go:89] found id: ""
	I0229 19:01:59.722902   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.722910   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:59.722915   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:59.722982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:59.761727   47919 cri.go:89] found id: ""
	I0229 19:01:59.761757   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.761767   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:59.761778   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:59.761839   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:59.805733   47919 cri.go:89] found id: ""
	I0229 19:01:59.805762   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.805772   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:59.805783   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:59.805798   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:59.883702   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:59.883721   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:59.883733   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.960649   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:59.960682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:00.012085   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:00.012121   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:00.065794   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:00.065834   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.583319   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:02.603123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:02.603178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:02.654992   47919 cri.go:89] found id: ""
	I0229 19:02:02.655017   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.655046   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:02.655053   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:02.655103   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:02.697067   47919 cri.go:89] found id: ""
	I0229 19:02:02.697098   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.697109   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:02.697116   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:02.697178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:02.734804   47919 cri.go:89] found id: ""
	I0229 19:02:02.734828   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.734835   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:02.734841   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:02.734893   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:02.778292   47919 cri.go:89] found id: ""
	I0229 19:02:02.778313   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.778321   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:02.778328   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:02.778382   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:02.819431   47919 cri.go:89] found id: ""
	I0229 19:02:02.819458   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.819470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:02.819478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:02.819537   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:02.862409   47919 cri.go:89] found id: ""
	I0229 19:02:02.862432   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.862439   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:02.862445   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:02.862487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:02.902486   47919 cri.go:89] found id: ""
	I0229 19:02:02.902513   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.902521   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:02.902526   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:02.902571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:02.939408   47919 cri.go:89] found id: ""
	I0229 19:02:02.939436   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.939443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:02.939451   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:02.939462   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.954539   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:02.954564   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:03.032534   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:03.032556   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:03.032574   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:03.116064   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:03.116096   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:03.167242   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:03.167265   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:05.718312   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:05.732879   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:05.733012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:05.774525   47919 cri.go:89] found id: ""
	I0229 19:02:05.774557   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.774569   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:05.774577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:05.774640   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:05.817870   47919 cri.go:89] found id: ""
	I0229 19:02:05.817900   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.817912   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:05.817919   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:05.817998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:05.859533   47919 cri.go:89] found id: ""
	I0229 19:02:05.859565   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.859579   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:05.859587   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:05.859646   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:05.904971   47919 cri.go:89] found id: ""
	I0229 19:02:05.905003   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.905014   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:05.905021   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:05.905086   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:05.950431   47919 cri.go:89] found id: ""
	I0229 19:02:05.950459   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.950470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:05.950478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:05.950546   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:05.999464   47919 cri.go:89] found id: ""
	I0229 19:02:05.999489   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.999500   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:05.999508   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:05.999588   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:06.045086   47919 cri.go:89] found id: ""
	I0229 19:02:06.045117   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.045133   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:06.045140   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:06.045203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:06.091542   47919 cri.go:89] found id: ""
	I0229 19:02:06.091571   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.091583   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:06.091592   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:06.091607   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:06.156524   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:06.156558   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:06.174941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:06.174965   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:06.260443   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:06.260467   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:06.260483   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:06.377415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:06.377457   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:08.931407   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:08.946035   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:08.946108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:08.989299   47919 cri.go:89] found id: ""
	I0229 19:02:08.989326   47919 logs.go:276] 0 containers: []
	W0229 19:02:08.989338   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:08.989345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:08.989405   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:09.033634   47919 cri.go:89] found id: ""
	I0229 19:02:09.033664   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.033677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:09.033684   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:09.033745   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:09.084381   47919 cri.go:89] found id: ""
	I0229 19:02:09.084406   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.084435   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:09.084442   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:09.084507   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:09.132526   47919 cri.go:89] found id: ""
	I0229 19:02:09.132555   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.132573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:09.132581   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:09.132644   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:09.182655   47919 cri.go:89] found id: ""
	I0229 19:02:09.182684   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.182694   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:09.182701   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:09.182764   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:09.223164   47919 cri.go:89] found id: ""
	I0229 19:02:09.223191   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.223202   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:09.223210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:09.223267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:09.271882   47919 cri.go:89] found id: ""
	I0229 19:02:09.271908   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.271926   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:09.271934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:09.271992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:09.331796   47919 cri.go:89] found id: ""
	I0229 19:02:09.331826   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.331837   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:09.331847   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:09.331860   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:09.398969   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:09.399009   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:09.418992   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:09.419040   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:09.503358   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:09.503381   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:09.503394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:09.612549   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:09.612586   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:12.162138   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:12.175827   47919 kubeadm.go:640] restartCluster took 4m14.562960798s
	W0229 19:02:12.175902   47919 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 19:02:12.175940   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:12.639231   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:12.658353   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:12.671552   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:12.684278   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
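The missing kubeconfig files are expected at this point: the kubeadm reset a few lines above removes the /etc/kubernetes/*.conf kubeconfig files, so the stale-config check fails and minikube proceeds straight to a fresh kubeadm init below. A quick manual spot check (the same command the log runs) would be:

	sudo ls -la /etc/kubernetes/    # confirms admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf are gone after the reset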
	I0229 19:02:12.684323   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:02:12.903644   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:04:08.955017   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:04:08.955134   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:04:08.956493   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:08.956586   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:08.956684   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:08.956809   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:08.956955   47919 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 19:04:08.957116   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:08.957253   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:08.957304   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:08.957375   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:08.959231   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:08.959317   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:08.959429   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:08.959550   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:08.959637   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:08.959745   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:08.959792   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:08.959851   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:08.959934   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:08.960022   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:08.960099   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:08.960159   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:08.960227   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:08.960303   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:08.960349   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:08.960403   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:08.960462   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:08.960540   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:08.962078   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:08.962181   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:08.962279   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:08.962361   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:08.962470   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:08.962646   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:08.962689   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:08.962777   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.962968   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963056   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963331   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963436   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963646   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963761   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963949   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964053   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.964273   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964281   47919 kubeadm.go:322] 
	I0229 19:04:08.964313   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:04:08.964351   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:04:08.964358   47919 kubeadm.go:322] 
	I0229 19:04:08.964385   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:04:08.964441   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:04:08.964547   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:04:08.964560   47919 kubeadm.go:322] 
	I0229 19:04:08.964684   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:04:08.964734   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:04:08.964780   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:04:08.964789   47919 kubeadm.go:322] 
	I0229 19:04:08.964922   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:04:08.965053   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:04:08.965180   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:04:08.965255   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:04:08.965342   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:04:08.965438   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 19:04:08.965475   47919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 19:04:08.965520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:04:09.441915   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:09.459807   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:04:09.471061   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:04:09.471099   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:04:09.532830   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:09.532979   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:09.673720   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:09.673884   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:09.674071   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:09.905201   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:09.906612   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:09.915393   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:10.035443   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:10.037103   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:10.037203   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:10.037335   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:10.037453   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:10.037558   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:10.037689   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:10.037832   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:10.038465   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:10.038932   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:10.039471   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:10.039874   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:10.039961   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:10.040045   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:10.157741   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:10.426271   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:10.528768   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:10.595099   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:10.596020   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:10.597781   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:10.597872   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:10.602307   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:10.603371   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:10.604660   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:10.607876   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:50.609556   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:50.610341   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:50.610592   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:55.610941   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:55.611235   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:05.611726   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:05.611996   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:25.612622   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:25.612856   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613204   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:06:05.613467   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613495   47919 kubeadm.go:322] 
	I0229 19:06:05.613547   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:06:05.613598   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:06:05.613608   47919 kubeadm.go:322] 
	I0229 19:06:05.613653   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:06:05.613694   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:06:05.613814   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:06:05.613823   47919 kubeadm.go:322] 
	I0229 19:06:05.613911   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:06:05.613941   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:06:05.613974   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:06:05.613980   47919 kubeadm.go:322] 
	I0229 19:06:05.614107   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:06:05.614240   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:06:05.614361   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:06:05.614432   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:06:05.614533   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:06:05.614577   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:06:05.615575   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:06:05.615689   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:06:05.615765   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:06:05.615822   47919 kubeadm.go:406] StartCluster complete in 8m8.067253054s
	I0229 19:06:05.615873   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:06:05.615920   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:06:05.671959   47919 cri.go:89] found id: ""
	I0229 19:06:05.671998   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.672018   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:05.672025   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:06:05.672075   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:06:05.715832   47919 cri.go:89] found id: ""
	I0229 19:06:05.715853   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.715860   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:06:05.715866   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:06:05.715911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:06:05.755305   47919 cri.go:89] found id: ""
	I0229 19:06:05.755334   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.755345   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:06:05.755351   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:06:05.755409   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:06:05.807907   47919 cri.go:89] found id: ""
	I0229 19:06:05.807938   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.807950   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:05.807957   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:06:05.808015   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:06:05.892777   47919 cri.go:89] found id: ""
	I0229 19:06:05.892805   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.892813   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:05.892818   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:06:05.892877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:06:05.931488   47919 cri.go:89] found id: ""
	I0229 19:06:05.931516   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.931527   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:05.931534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:06:05.931578   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:06:05.971989   47919 cri.go:89] found id: ""
	I0229 19:06:05.972018   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.972030   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:05.972037   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:06:05.972112   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:06:06.008174   47919 cri.go:89] found id: ""
	I0229 19:06:06.008198   47919 logs.go:276] 0 containers: []
	W0229 19:06:06.008208   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:06.008224   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:06.008241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:06.024924   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:06.024953   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:06.111879   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:06.111904   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:06:06.111918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:06:06.221563   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:06:06.221593   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:06.266861   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:06.266897   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:06.314923   47919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:06:06.314971   47919 out.go:239] * 
	* 
	W0229 19:06:06.315043   47919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.315065   47919 out.go:239] * 
	* 
	W0229 19:06:06.315824   47919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:06:06.318988   47919 out.go:177] 
	W0229 19:06:06.320200   47919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.320245   47919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:06:06.320270   47919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:06:06.321598   47919 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-631080 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (253.366255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25: (1.599849183s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:53:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:53:39.272407   48088 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:53:39.272662   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272672   48088 out.go:304] Setting ErrFile to fd 2...
	I0229 18:53:39.272676   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272900   48088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:53:39.273517   48088 out.go:298] Setting JSON to false
	I0229 18:53:39.274405   48088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1709227056,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:53:39.274466   48088 start.go:139] virtualization: kvm guest
	I0229 18:53:39.276633   48088 out.go:177] * [default-k8s-diff-port-153528] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:53:39.278195   48088 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:53:39.278144   48088 notify.go:220] Checking for updates...
	I0229 18:53:39.280040   48088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:53:39.281568   48088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:53:39.282972   48088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:53:39.284383   48088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:53:39.285858   48088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:53:39.287467   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:53:39.287851   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.287889   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.302503   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0229 18:53:39.302895   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.303402   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.303427   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.303737   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.303893   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.304118   48088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:53:39.304507   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.304554   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.318572   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0229 18:53:39.318978   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.319454   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.319482   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.319748   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.319924   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.351526   48088 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:53:39.352970   48088 start.go:299] selected driver: kvm2
	I0229 18:53:39.352988   48088 start.go:903] validating driver "kvm2" against &{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.353115   48088 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:53:39.353788   48088 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.353869   48088 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:53:39.369184   48088 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:53:39.369569   48088 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:53:39.369647   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:53:39.369664   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:53:39.369679   48088 start_flags.go:323] config:
	{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-15352
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.369878   48088 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.372634   48088 out.go:177] * Starting control plane node default-k8s-diff-port-153528 in cluster default-k8s-diff-port-153528
	I0229 18:53:41.043270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:39.373930   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:53:39.373998   48088 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:53:39.374011   48088 cache.go:56] Caching tarball of preloaded images
	I0229 18:53:39.374104   48088 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:53:39.374116   48088 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:53:39.374227   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:53:39.374456   48088 start.go:365] acquiring machines lock for default-k8s-diff-port-153528: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:53:44.115305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:50.195317   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:53.267316   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:59.347225   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:02.419258   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:08.499302   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:11.571267   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:17.651296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:20.723290   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:26.803304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:29.875293   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:35.955253   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:39.027319   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:45.107197   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:48.179318   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:54.259261   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:57.331310   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:03.411271   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:06.483320   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:12.563270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:15.635250   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:21.715338   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:24.787238   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:30.867305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:33.939296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:40.019217   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:43.091236   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:49.171281   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:52.243241   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:58.323315   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:01.395368   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:07.475286   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:10.547288   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:16.627301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:19.699291   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:25.779304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:28.851346   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:34.931303   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:38.003301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:44.083295   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:47.155306   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:53.235287   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:56.307311   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:02.387296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:05.391079   47608 start.go:369] acquired machines lock for "embed-certs-991128" in 4m30.01926313s
	I0229 18:57:05.391125   47608 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:05.391130   47608 fix.go:54] fixHost starting: 
	I0229 18:57:05.391473   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:05.391502   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:05.406385   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0229 18:57:05.406855   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:05.407342   47608 main.go:141] libmachine: Using API Version  1
	I0229 18:57:05.407366   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:05.407730   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:05.407939   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:05.408088   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 18:57:05.409862   47608 fix.go:102] recreateIfNeeded on embed-certs-991128: state=Stopped err=<nil>
	I0229 18:57:05.409895   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	W0229 18:57:05.410005   47608 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:05.411812   47608 out.go:177] * Restarting existing kvm2 VM for "embed-certs-991128" ...
	I0229 18:57:05.389096   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:05.389139   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:57:05.390953   47515 machine.go:91] provisioned docker machine in 4m37.390712428s
	I0229 18:57:05.390991   47515 fix.go:56] fixHost completed within 4m37.410903519s
	I0229 18:57:05.390997   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 4m37.410926595s
	W0229 18:57:05.391017   47515 start.go:694] error starting host: provision: host is not running
	W0229 18:57:05.391155   47515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 18:57:05.391169   47515 start.go:709] Will try again in 5 seconds ...
	I0229 18:57:05.413295   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Start
	I0229 18:57:05.413478   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring networks are active...
	I0229 18:57:05.414184   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network default is active
	I0229 18:57:05.414495   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network mk-embed-certs-991128 is active
	I0229 18:57:05.414834   47608 main.go:141] libmachine: (embed-certs-991128) Getting domain xml...
	I0229 18:57:05.415508   47608 main.go:141] libmachine: (embed-certs-991128) Creating domain...
	I0229 18:57:06.606675   47608 main.go:141] libmachine: (embed-certs-991128) Waiting to get IP...
	I0229 18:57:06.607445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.607771   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.607826   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.607762   48607 retry.go:31] will retry after 250.745087ms: waiting for machine to come up
	I0229 18:57:06.860293   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.860711   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.860738   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.860671   48607 retry.go:31] will retry after 259.096096ms: waiting for machine to come up
	I0229 18:57:07.121033   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.121429   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.121458   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.121381   48607 retry.go:31] will retry after 318.126905ms: waiting for machine to come up
	I0229 18:57:07.440859   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.441299   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.441328   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.441243   48607 retry.go:31] will retry after 570.321317ms: waiting for machine to come up
	I0229 18:57:08.012896   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.013331   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.013367   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.013295   48607 retry.go:31] will retry after 489.540139ms: waiting for machine to come up
	I0229 18:57:08.503916   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.504321   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.504358   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.504269   48607 retry.go:31] will retry after 929.011093ms: waiting for machine to come up
	I0229 18:57:09.435395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:09.435803   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:09.435851   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:09.435761   48607 retry.go:31] will retry after 1.087849565s: waiting for machine to come up
	I0229 18:57:10.391806   47515 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:57:10.525247   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:10.525663   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:10.525697   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:10.525612   48607 retry.go:31] will retry after 954.10405ms: waiting for machine to come up
	I0229 18:57:11.481162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:11.481610   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:11.481640   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:11.481558   48607 retry.go:31] will retry after 1.495484693s: waiting for machine to come up
	I0229 18:57:12.979123   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:12.979547   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:12.979572   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:12.979499   48607 retry.go:31] will retry after 2.307927756s: waiting for machine to come up
	I0229 18:57:15.288445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:15.288841   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:15.288871   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:15.288785   48607 retry.go:31] will retry after 2.89615753s: waiting for machine to come up
	I0229 18:57:18.188102   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:18.188474   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:18.188504   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:18.188426   48607 retry.go:31] will retry after 3.511036368s: waiting for machine to come up
	I0229 18:57:21.701039   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:21.701395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:21.701425   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:21.701356   48607 retry.go:31] will retry after 3.516537008s: waiting for machine to come up
	I0229 18:57:25.220199   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220641   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has current primary IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220655   47608 main.go:141] libmachine: (embed-certs-991128) Found IP for machine: 192.168.61.34
	I0229 18:57:25.220663   47608 main.go:141] libmachine: (embed-certs-991128) Reserving static IP address...
	I0229 18:57:25.221122   47608 main.go:141] libmachine: (embed-certs-991128) Reserved static IP address: 192.168.61.34
	I0229 18:57:25.221162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.221179   47608 main.go:141] libmachine: (embed-certs-991128) Waiting for SSH to be available...
	I0229 18:57:25.221222   47608 main.go:141] libmachine: (embed-certs-991128) DBG | skip adding static IP to network mk-embed-certs-991128 - found existing host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"}
	I0229 18:57:25.221243   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Getting to WaitForSSH function...
	I0229 18:57:25.223450   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223775   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.223809   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223951   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH client type: external
	I0229 18:57:25.223981   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa (-rw-------)
	I0229 18:57:25.224014   47608 main.go:141] libmachine: (embed-certs-991128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:25.224032   47608 main.go:141] libmachine: (embed-certs-991128) DBG | About to run SSH command:
	I0229 18:57:25.224052   47608 main.go:141] libmachine: (embed-certs-991128) DBG | exit 0
	I0229 18:57:26.464131   47919 start.go:369] acquired machines lock for "old-k8s-version-631080" in 4m11.42071391s
	I0229 18:57:26.464193   47919 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:26.464200   47919 fix.go:54] fixHost starting: 
	I0229 18:57:26.464621   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:26.464657   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:26.480155   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0229 18:57:26.480488   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:26.481000   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:57:26.481027   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:26.481327   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:26.481514   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:26.481669   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:57:26.482869   47919 fix.go:102] recreateIfNeeded on old-k8s-version-631080: state=Stopped err=<nil>
	I0229 18:57:26.482885   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	W0229 18:57:26.483052   47919 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:26.485421   47919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	I0229 18:57:25.351081   47608 main.go:141] libmachine: (embed-certs-991128) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:25.351434   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetConfigRaw
	I0229 18:57:25.352022   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.354349   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354705   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.354734   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354944   47608 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/config.json ...
	I0229 18:57:25.355150   47608 machine.go:88] provisioning docker machine ...
	I0229 18:57:25.355169   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:25.355351   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355501   47608 buildroot.go:166] provisioning hostname "embed-certs-991128"
	I0229 18:57:25.355528   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355763   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.357784   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358109   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.358134   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.358429   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358567   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358683   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.358840   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.359062   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.359078   47608 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-991128 && echo "embed-certs-991128" | sudo tee /etc/hostname
	I0229 18:57:25.487161   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991128
	
	I0229 18:57:25.487197   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.489979   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490275   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.490308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490539   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.490755   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.490908   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.491047   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.491191   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.491377   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.491405   47608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-991128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-991128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-991128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:25.617911   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:25.617941   47608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:25.617961   47608 buildroot.go:174] setting up certificates
	I0229 18:57:25.617971   47608 provision.go:83] configureAuth start
	I0229 18:57:25.617980   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.618235   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.620943   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621286   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.621318   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621460   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.623629   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.623936   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.623961   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.624074   47608 provision.go:138] copyHostCerts
	I0229 18:57:25.624133   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:25.624154   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:25.624240   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:25.624344   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:25.624355   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:25.624383   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:25.624455   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:25.624462   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:25.624483   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:25.624538   47608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.embed-certs-991128 san=[192.168.61.34 192.168.61.34 localhost 127.0.0.1 minikube embed-certs-991128]
	I0229 18:57:25.757225   47608 provision.go:172] copyRemoteCerts
	I0229 18:57:25.757278   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:25.757301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.759794   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760098   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.760125   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760287   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.760488   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.760664   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.760798   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:25.849527   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:57:25.875673   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:25.902046   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:57:25.927830   47608 provision.go:86] duration metric: configureAuth took 309.850774ms
	I0229 18:57:25.927862   47608 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:25.928081   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:57:25.928163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.930565   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.930917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.930945   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.931135   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.931336   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931493   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931649   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.931806   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.932003   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.932026   47608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:26.205080   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:26.205139   47608 machine.go:91] provisioned docker machine in 849.974413ms
	I0229 18:57:26.205154   47608 start.go:300] post-start starting for "embed-certs-991128" (driver="kvm2")
	I0229 18:57:26.205168   47608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:26.205191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.205537   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:26.205568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.208107   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208417   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.208443   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208625   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.208804   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.208975   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.209084   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.303090   47608 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:26.309522   47608 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:26.309543   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:26.309609   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:26.309697   47608 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:26.309800   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:26.319897   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:26.346220   47608 start.go:303] post-start completed in 141.055399ms
	I0229 18:57:26.346242   47608 fix.go:56] fixHost completed within 20.955110287s
	I0229 18:57:26.346265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.348878   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349237   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.349278   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349415   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.349591   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349742   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349860   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.350032   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:26.350224   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:26.350235   47608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:26.463992   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233046.436502673
	
	I0229 18:57:26.464017   47608 fix.go:206] guest clock: 1709233046.436502673
	I0229 18:57:26.464027   47608 fix.go:219] Guest: 2024-02-29 18:57:26.436502673 +0000 UTC Remote: 2024-02-29 18:57:26.346246091 +0000 UTC m=+291.120011459 (delta=90.256582ms)
	I0229 18:57:26.464055   47608 fix.go:190] guest clock delta is within tolerance: 90.256582ms
	I0229 18:57:26.464062   47608 start.go:83] releasing machines lock for "embed-certs-991128", held for 21.072955529s
	I0229 18:57:26.464099   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.464362   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:26.466954   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.467350   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467452   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468058   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468227   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468287   47608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:26.468356   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.468456   47608 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:26.468477   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.470917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.470996   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471291   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471322   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471352   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471369   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471562   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471602   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471719   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471783   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471873   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.471940   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.472005   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.472095   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.560629   47608 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:26.587852   47608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:26.752819   47608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:26.760557   47608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:26.760629   47608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:26.778065   47608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:26.778096   47608 start.go:475] detecting cgroup driver to use...
	I0229 18:57:26.778156   47608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:26.795970   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:26.810591   47608 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:26.810634   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:26.826715   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:26.840879   47608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:26.959536   47608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:27.143802   47608 docker.go:233] disabling docker service ...
	I0229 18:57:27.143856   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:27.164748   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:27.183161   47608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:27.322659   47608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:27.471650   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:27.489290   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:27.512706   47608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:57:27.512770   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.524596   47608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:27.524657   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.536202   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.547343   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.558390   47608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:27.571297   47608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:27.580859   47608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:27.580903   47608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:27.595324   47608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:27.606130   47608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:27.736363   47608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:27.877719   47608 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:27.877804   47608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:27.882920   47608 start.go:543] Will wait 60s for crictl version
	I0229 18:57:27.883035   47608 ssh_runner.go:195] Run: which crictl
	I0229 18:57:27.887132   47608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:27.925964   47608 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:27.926061   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.958046   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.991575   47608 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:57:26.486586   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .Start
	I0229 18:57:26.486734   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:57:26.487377   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:57:26.487679   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:57:26.488006   47919 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:57:26.488624   47919 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:57:27.689480   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:57:27.690414   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:27.690858   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:27.690932   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:27.690848   48724 retry.go:31] will retry after 309.860592ms: waiting for machine to come up
	I0229 18:57:28.002437   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.002926   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.002959   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.002884   48724 retry.go:31] will retry after 298.018759ms: waiting for machine to come up
	I0229 18:57:28.302325   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.302849   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.302879   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.302801   48724 retry.go:31] will retry after 312.821928ms: waiting for machine to come up
	I0229 18:57:28.617315   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.617797   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.617831   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.617753   48724 retry.go:31] will retry after 373.960028ms: waiting for machine to come up
	I0229 18:57:28.993230   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.993860   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.993881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.993809   48724 retry.go:31] will retry after 516.423282ms: waiting for machine to come up
	I0229 18:57:29.512208   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:29.512683   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:29.512718   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:29.512651   48724 retry.go:31] will retry after 776.839747ms: waiting for machine to come up
	I0229 18:57:27.992835   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:27.995847   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996225   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:27.996255   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996483   47608 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:28.001148   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:28.016232   47608 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:57:28.016293   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:28.055181   47608 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:57:28.055248   47608 ssh_runner.go:195] Run: which lz4
	I0229 18:57:28.059680   47608 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:28.064299   47608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:28.064330   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:57:29.988576   47608 crio.go:444] Took 1.928948 seconds to copy over tarball
	I0229 18:57:29.988670   47608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:30.290748   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:30.291228   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:30.291276   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:30.291195   48724 retry.go:31] will retry after 846.002471ms: waiting for machine to come up
	I0229 18:57:31.139734   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:31.140157   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:31.140177   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:31.140114   48724 retry.go:31] will retry after 1.01688411s: waiting for machine to come up
	I0229 18:57:32.158306   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:32.158845   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:32.158868   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:32.158827   48724 retry.go:31] will retry after 1.217119434s: waiting for machine to come up
	I0229 18:57:33.377121   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:33.377508   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:33.377538   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:33.377475   48724 retry.go:31] will retry after 1.566910779s: waiting for machine to come up
	I0229 18:57:32.844311   47608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.855608287s)
	I0229 18:57:32.844344   47608 crio.go:451] Took 2.855747 seconds to extract the tarball
	I0229 18:57:32.844356   47608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:32.890199   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:32.953328   47608 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:57:32.953351   47608 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:57:32.953408   47608 ssh_runner.go:195] Run: crio config
	I0229 18:57:33.006678   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:33.006701   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:33.006717   47608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:33.006734   47608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-991128 NodeName:embed-certs-991128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:57:33.006872   47608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-991128"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:33.006951   47608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-991128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:33.006998   47608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:57:33.018746   47608 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:33.018824   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:33.029994   47608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 18:57:33.050522   47608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:33.070313   47608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0229 18:57:33.091436   47608 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:33.096253   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:33.110683   47608 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128 for IP: 192.168.61.34
	I0229 18:57:33.110720   47608 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:33.110892   47608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:33.110957   47608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:33.111075   47608 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/client.key
	I0229 18:57:33.111147   47608 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key.d8cf1313
	I0229 18:57:33.111195   47608 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key
	I0229 18:57:33.111320   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:33.111352   47608 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:33.111362   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:33.111383   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:33.111406   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:33.111443   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:33.111479   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:33.112071   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:33.143498   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:57:33.171567   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:33.199300   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:57:33.226492   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:33.254025   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:33.281215   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:33.311188   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:33.342138   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:33.373884   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:33.401130   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:33.427527   47608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:33.446246   47608 ssh_runner.go:195] Run: openssl version
	I0229 18:57:33.455476   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:33.473394   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478904   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478961   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.485913   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:33.499458   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:33.512861   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518749   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518808   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.525612   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:33.539397   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:33.552302   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557481   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557543   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.564226   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:33.577315   47608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:33.582527   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:33.589246   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:33.595992   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:33.602535   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:33.609231   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:33.616292   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:33.623124   47608 kubeadm.go:404] StartCluster: {Name:embed-certs-991128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:33.623239   47608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:33.623281   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:33.663871   47608 cri.go:89] found id: ""
	I0229 18:57:33.663948   47608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:33.676484   47608 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:33.676519   47608 kubeadm.go:636] restartCluster start
	I0229 18:57:33.676576   47608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:33.690000   47608 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:33.690903   47608 kubeconfig.go:92] found "embed-certs-991128" server: "https://192.168.61.34:8443"
	I0229 18:57:33.692909   47608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:33.706062   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:33.706162   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:33.722166   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.206285   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.206371   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.222736   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.706286   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.706415   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.721170   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:35.206815   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.206905   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.223777   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.946027   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:35.171546   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:35.171576   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:34.946337   48724 retry.go:31] will retry after 2.169140366s: waiting for machine to come up
	I0229 18:57:37.117080   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:37.117531   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:37.117564   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:37.117491   48724 retry.go:31] will retry after 2.187461538s: waiting for machine to come up
	I0229 18:57:39.307825   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:39.308159   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:39.308199   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:39.308131   48724 retry.go:31] will retry after 4.480150028s: waiting for machine to come up
	I0229 18:57:35.706239   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.706327   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.727095   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.206608   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.206718   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.220509   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.707149   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.707237   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.725852   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.206401   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.206530   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.225323   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.706920   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.707051   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.725340   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.207012   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.207113   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.225343   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.706906   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.706988   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.720820   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.206324   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.206399   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.220757   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.706274   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.706361   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.719994   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:40.206511   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.206589   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.219998   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.790597   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:43.791050   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:43.791076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:43.790999   48724 retry.go:31] will retry after 3.830907426s: waiting for machine to come up
	I0229 18:57:40.706115   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.706262   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.719892   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.206440   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.206518   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.220057   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.706585   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.706677   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.720355   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.206977   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.207107   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.220629   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.706185   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.706266   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.720230   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.206832   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.206901   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.221019   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.706611   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.706693   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.720457   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.720489   47608 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:57:43.720501   47608 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:57:43.720515   47608 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:57:43.720572   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:43.757509   47608 cri.go:89] found id: ""
	I0229 18:57:43.757592   47608 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:57:43.777950   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:57:43.788404   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:57:43.788455   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799322   47608 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799340   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:43.907052   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.731907   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.940317   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.040382   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.113335   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:57:45.113418   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:48.808893   48088 start.go:369] acquired machines lock for "default-k8s-diff-port-153528" in 4m9.434383703s
	I0229 18:57:48.808960   48088 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:48.808973   48088 fix.go:54] fixHost starting: 
	I0229 18:57:48.809402   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:48.809445   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:48.829022   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0229 18:57:48.829448   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:48.830097   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:57:48.830129   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:48.830547   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:48.830766   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:57:48.830918   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 18:57:48.832707   48088 fix.go:102] recreateIfNeeded on default-k8s-diff-port-153528: state=Stopped err=<nil>
	I0229 18:57:48.832733   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	W0229 18:57:48.832879   48088 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:48.834969   48088 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-153528" ...
	I0229 18:57:48.836190   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Start
	I0229 18:57:48.836352   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring networks are active...
	I0229 18:57:48.837051   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network default is active
	I0229 18:57:48.837440   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network mk-default-k8s-diff-port-153528 is active
	I0229 18:57:48.837886   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Getting domain xml...
	I0229 18:57:48.838747   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Creating domain...
	I0229 18:57:47.623408   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623861   47919 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:57:47.623891   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623900   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:57:47.624340   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:57:47.624374   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.624390   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:57:47.624419   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | skip adding static IP to network mk-old-k8s-version-631080 - found existing host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"}
	I0229 18:57:47.624440   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:57:47.626600   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.626881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.626904   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.627042   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:57:47.627070   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:57:47.627106   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:47.627127   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:57:47.627146   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:57:47.751206   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:47.751569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:57:47.752158   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.754701   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755064   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.755089   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755331   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:57:47.755551   47919 machine.go:88] provisioning docker machine ...
	I0229 18:57:47.755569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:47.755772   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.755961   47919 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:57:47.755979   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.756102   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.758421   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758767   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.758796   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758895   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.759065   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759387   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.759548   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.759718   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.759730   47919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:57:47.879204   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:57:47.879233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.881915   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882207   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.882237   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882407   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.882582   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882737   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882880   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.883053   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.883244   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.883262   47919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:47.996920   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:47.996948   47919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:47.996964   47919 buildroot.go:174] setting up certificates
	I0229 18:57:47.996972   47919 provision.go:83] configureAuth start
	I0229 18:57:47.996980   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.997262   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.999702   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000044   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.000076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000207   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.002169   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002457   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.002479   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002552   47919 provision.go:138] copyHostCerts
	I0229 18:57:48.002600   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:48.002623   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:48.002690   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:48.002805   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:48.002820   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:48.002854   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:48.002936   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:48.002946   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:48.002965   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:48.003030   47919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:57:48.095543   47919 provision.go:172] copyRemoteCerts
	I0229 18:57:48.095594   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:48.095617   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.098167   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098411   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.098439   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098593   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.098770   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.098910   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.099046   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.178774   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:48.204896   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:57:48.234660   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:57:48.264189   47919 provision.go:86] duration metric: configureAuth took 267.20486ms
	I0229 18:57:48.264215   47919 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:48.264391   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:57:48.264464   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.267066   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267464   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.267500   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267721   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.267913   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268105   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268260   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.268425   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.268630   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.268658   47919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:48.560376   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:48.560401   47919 machine.go:91] provisioned docker machine in 804.837627ms
	I0229 18:57:48.560414   47919 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:57:48.560426   47919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:48.560450   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.560751   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:48.560776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.563312   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563638   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.563670   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.563971   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.564126   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.564295   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.646996   47919 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:48.652329   47919 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:48.652356   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:48.652428   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:48.652538   47919 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:48.652665   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:48.663432   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:48.694980   47919 start.go:303] post-start completed in 134.554808ms
	I0229 18:57:48.695000   47919 fix.go:56] fixHost completed within 22.230801566s
	I0229 18:57:48.695033   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.697788   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698205   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.698231   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698416   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.698633   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698797   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698941   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.699118   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.699327   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.699349   47919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:48.808714   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233068.793225740
	
	I0229 18:57:48.808740   47919 fix.go:206] guest clock: 1709233068.793225740
	I0229 18:57:48.808751   47919 fix.go:219] Guest: 2024-02-29 18:57:48.79322574 +0000 UTC Remote: 2024-02-29 18:57:48.695003912 +0000 UTC m=+273.807414604 (delta=98.221828ms)
	I0229 18:57:48.808793   47919 fix.go:190] guest clock delta is within tolerance: 98.221828ms
	I0229 18:57:48.808800   47919 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 22.344627122s
	I0229 18:57:48.808832   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.809114   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:48.811872   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812297   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.812336   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812522   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813072   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813270   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813347   47919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:48.813392   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.813509   47919 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:48.813536   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.816200   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816580   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.816607   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816676   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816753   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.816939   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817097   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817244   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.817268   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.817293   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.817420   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.817538   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817643   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817769   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.919708   47919 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:48.926381   47919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:49.086263   47919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:49.093350   47919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:49.093427   47919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:49.112686   47919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:49.112716   47919 start.go:475] detecting cgroup driver to use...
	I0229 18:57:49.112784   47919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:49.135232   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:49.152937   47919 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:49.152992   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:49.172048   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:49.190450   47919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:49.341605   47919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:49.539663   47919 docker.go:233] disabling docker service ...
	I0229 18:57:49.539733   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:49.562110   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:49.578761   47919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:49.739044   47919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:49.897866   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:49.918783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:45.613998   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.114525   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.146283   47608 api_server.go:72] duration metric: took 1.032950423s to wait for apiserver process to appear ...
	I0229 18:57:46.146327   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:57:46.146344   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:46.146876   47608 api_server.go:269] stopped: https://192.168.61.34:8443/healthz: Get "https://192.168.61.34:8443/healthz": dial tcp 192.168.61.34:8443: connect: connection refused
	I0229 18:57:46.646633   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.751381   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.751410   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:49.751427   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.791602   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.791634   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:50.147094   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.153644   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.153671   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:49.941241   47919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:57:49.941328   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.953131   47919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:49.953215   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.964850   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.976035   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.988017   47919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:50.000990   47919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:50.019124   47919 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:50.019177   47919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:50.042881   47919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:50.054219   47919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:50.213793   47919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:50.387473   47919 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:50.387536   47919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:50.395113   47919 start.go:543] Will wait 60s for crictl version
	I0229 18:57:50.395177   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:50.400166   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:50.446910   47919 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:50.447015   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.486139   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.528290   47919 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:57:50.646967   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.660388   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.660420   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:51.146674   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:51.155154   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 18:57:51.166220   47608 api_server.go:141] control plane version: v1.28.4
	I0229 18:57:51.166255   47608 api_server.go:131] duration metric: took 5.019919259s to wait for apiserver health ...
	I0229 18:57:51.166267   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:51.166277   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:51.168259   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:57:50.148417   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting to get IP...
	I0229 18:57:50.149211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149601   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149661   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.149584   48864 retry.go:31] will retry after 287.925969ms: waiting for machine to come up
	I0229 18:57:50.439389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440003   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440033   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.439944   48864 retry.go:31] will retry after 341.540721ms: waiting for machine to come up
	I0229 18:57:50.783988   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784622   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.784544   48864 retry.go:31] will retry after 344.053696ms: waiting for machine to come up
	I0229 18:57:51.130288   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131126   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131152   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.131075   48864 retry.go:31] will retry after 593.843769ms: waiting for machine to come up
	I0229 18:57:51.726464   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.726974   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.727000   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.726879   48864 retry.go:31] will retry after 689.199247ms: waiting for machine to come up
	I0229 18:57:52.418297   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418801   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418829   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:52.418753   48864 retry.go:31] will retry after 737.671716ms: waiting for machine to come up
	I0229 18:57:53.158161   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158573   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158618   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:53.158521   48864 retry.go:31] will retry after 1.18162067s: waiting for machine to come up
	I0229 18:57:50.530077   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:50.533389   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.533761   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:50.533794   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.534001   47919 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:50.538857   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:50.556961   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:57:50.557028   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:50.616925   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:50.617001   47919 ssh_runner.go:195] Run: which lz4
	I0229 18:57:50.622857   47919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:50.628035   47919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:50.628070   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:57:52.679575   47919 crio.go:444] Took 2.056751 seconds to copy over tarball
	I0229 18:57:52.679656   47919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:51.169655   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:57:51.184521   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:57:51.215791   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:57:51.235050   47608 system_pods.go:59] 8 kube-system pods found
	I0229 18:57:51.235136   47608 system_pods.go:61] "coredns-5dd5756b68-6b5pm" [d8023f3b-fc07-4dd4-98dc-bd321d137a06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:57:51.235150   47608 system_pods.go:61] "etcd-embed-certs-991128" [01a1ee82-a650-4736-8fb9-e983427bef96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:57:51.235161   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [a6810e01-a958-4e7b-ba0f-6cd2e747b998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:57:51.235170   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [6469e9c8-7372-4756-926d-0de600c8ed4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:57:51.235179   47608 system_pods.go:61] "kube-proxy-zd7rf" [963b5fb6-f287-4c80-a324-b0cb09b1ae97] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:57:51.235190   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [ac2e7c83-6e96-46ba-aeed-c847d312ba4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:57:51.235199   47608 system_pods.go:61] "metrics-server-57f55c9bc5-5w6c9" [6ddb9b39-e1d1-4d34-bb45-e9a5c161f13d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:57:51.235220   47608 system_pods.go:61] "storage-provisioner" [99d0cbe5-bb8b-472b-be91-9f29442c8c1d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:57:51.235243   47608 system_pods.go:74] duration metric: took 19.430245ms to wait for pod list to return data ...
	I0229 18:57:51.235257   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:57:51.241823   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:57:51.241849   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 18:57:51.241863   47608 node_conditions.go:105] duration metric: took 6.600606ms to run NodePressure ...
	I0229 18:57:51.241884   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:51.654038   47608 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663120   47608 kubeadm.go:787] kubelet initialised
	I0229 18:57:51.663146   47608 kubeadm.go:788] duration metric: took 9.079921ms waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663156   47608 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:57:51.671417   47608 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:57:53.679921   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:54.342488   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.342981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.343006   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:54.342931   48864 retry.go:31] will retry after 1.180730966s: waiting for machine to come up
	I0229 18:57:55.524954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525398   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525427   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:55.525338   48864 retry.go:31] will retry after 1.706902899s: waiting for machine to come up
	I0229 18:57:57.233340   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233834   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233862   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:57.233791   48864 retry.go:31] will retry after 2.281506267s: waiting for machine to come up
	I0229 18:57:55.661321   47919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.981628592s)
	I0229 18:57:55.661351   47919 crio.go:451] Took 2.981744 seconds to extract the tarball
	I0229 18:57:55.661363   47919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:55.708924   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:55.751627   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:55.751650   47919 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:57:55.751726   47919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.751752   47919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.751758   47919 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:57:55.751735   47919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.751751   47919 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.751772   47919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.751864   47919 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.752153   47919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753139   47919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.753456   47919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.753467   47919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:57:55.753486   47919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.753547   47919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.934620   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988723   47919 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:57:55.988767   47919 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988811   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:55.993750   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:56.036192   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.037872   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.038123   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:57:56.040846   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:57:56.046242   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.065126   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.077683   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.126642   47919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:57:56.126683   47919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.126741   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.191928   47919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:57:56.191980   47919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.192006   47919 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:57:56.192037   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.192045   47919 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:57:56.192086   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.203773   47919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:57:56.203819   47919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.203863   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227761   47919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:57:56.227799   47919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.227832   47919 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:57:56.227856   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227864   47919 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.227876   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.227922   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227925   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:57:56.227956   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.227961   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.246645   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.344012   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:57:56.344125   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:57:56.346352   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:57:56.361309   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.361484   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:57:56.383942   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:57:56.411697   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:57:56.649625   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:56.801430   47919 cache_images.go:92] LoadImages completed in 1.049765957s
	W0229 18:57:56.801578   47919 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0229 18:57:56.801670   47919 ssh_runner.go:195] Run: crio config
	I0229 18:57:56.872210   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:57:56.872238   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:56.872260   47919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:56.872283   47919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:57:56.872458   47919 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:56.872545   47919 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:56.872620   47919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:57:56.884571   47919 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:56.884647   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:56.896167   47919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:57:56.916824   47919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:56.938739   47919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:57:56.961411   47919 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:56.966134   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:56.981089   47919 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:57:56.981121   47919 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:56.981295   47919 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:56.981358   47919 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:56.981465   47919 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:57:56.981533   47919 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:57:56.981586   47919 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:57:56.981755   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:56.981791   47919 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:56.981806   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:56.981845   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:56.981878   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:56.981910   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:56.981961   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:56.982889   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:57.015587   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:57:57.048698   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:57.078634   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:57:57.114008   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:57.146884   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:57.179560   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:57.211989   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:57.246936   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:57.280651   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:57.310050   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:57.337439   47919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:57.359100   47919 ssh_runner.go:195] Run: openssl version
	I0229 18:57:57.366111   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:57.380593   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386703   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386771   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.401429   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:57.416516   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:57.429645   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.434960   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.435010   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.441855   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:57.457277   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:57.471345   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476556   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476629   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.483318   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:57.496355   47919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:57.501976   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:57.509611   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:57.516861   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:57.523819   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:57.530959   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:57.539788   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:57.548575   47919 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:57.548663   47919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:57.548731   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:57.596234   47919 cri.go:89] found id: ""
	I0229 18:57:57.596327   47919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:57.612827   47919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:57.612856   47919 kubeadm.go:636] restartCluster start
	I0229 18:57:57.612903   47919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:57.627565   47919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:57.629049   47919 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:57:57.630139   47919 kubeconfig.go:146] "old-k8s-version-631080" context is missing from /home/jenkins/minikube-integration/18259-6428/kubeconfig - will repair!
	I0229 18:57:57.631735   47919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:57.634318   47919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:57.648383   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:57.648458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:57.663708   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.149010   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.149086   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.164430   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.649075   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.649186   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.663768   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.149370   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.149450   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.165089   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.648609   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.648690   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.665224   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:56.182137   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:58.681550   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:59.517428   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518040   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518069   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:59.517984   48864 retry.go:31] will retry after 2.738727804s: waiting for machine to come up
	I0229 18:58:02.258042   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258540   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258569   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:02.258498   48864 retry.go:31] will retry after 2.520892118s: waiting for machine to come up
	I0229 18:58:00.148880   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.148969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.168561   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.649227   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.649308   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.668162   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.148539   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.148600   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.168347   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.649392   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.649484   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.663974   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.149462   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.149548   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.164757   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.649398   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.649522   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.664014   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.148502   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.148718   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.165374   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.648528   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.648594   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.663305   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.148760   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.148847   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.163480   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.649122   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.649226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.663556   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.179941   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:03.679523   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:04.179171   47608 pod_ready.go:92] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.179198   47608 pod_ready.go:81] duration metric: took 12.507755709s waiting for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.179212   47608 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184638   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.184657   47608 pod_ready.go:81] duration metric: took 5.438559ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184665   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189119   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.189139   47608 pod_ready.go:81] duration metric: took 4.467998ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189147   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193800   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.193819   47608 pod_ready.go:81] duration metric: took 4.66771ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193827   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198220   47608 pod_ready.go:92] pod "kube-proxy-zd7rf" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.198239   47608 pod_ready.go:81] duration metric: took 4.405824ms waiting for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198246   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575846   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.575869   47608 pod_ready.go:81] duration metric: took 377.617228ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575878   47608 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.780871   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781307   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:04.781266   48864 retry.go:31] will retry after 3.73331916s: waiting for machine to come up
	I0229 18:58:08.519173   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.519646   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Found IP for machine: 192.168.39.210
	I0229 18:58:08.519666   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserving static IP address...
	I0229 18:58:08.519687   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has current primary IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.520011   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.520032   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserved static IP address: 192.168.39.210
	I0229 18:58:08.520046   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | skip adding static IP to network mk-default-k8s-diff-port-153528 - found existing host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"}
	I0229 18:58:08.520057   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Getting to WaitForSSH function...
	I0229 18:58:08.520067   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for SSH to be available...
	I0229 18:58:08.522047   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522377   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.522411   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522529   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH client type: external
	I0229 18:58:08.522555   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa (-rw-------)
	I0229 18:58:08.522592   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:08.522606   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | About to run SSH command:
	I0229 18:58:08.522616   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | exit 0
	I0229 18:58:08.651113   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:08.651447   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetConfigRaw
	I0229 18:58:08.652078   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.654739   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655191   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.655222   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655516   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:58:08.655758   48088 machine.go:88] provisioning docker machine ...
	I0229 18:58:08.655787   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:08.655976   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656109   48088 buildroot.go:166] provisioning hostname "default-k8s-diff-port-153528"
	I0229 18:58:08.656127   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.658580   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.658933   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.658961   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.659066   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.659255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659419   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659547   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.659714   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.659933   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.659952   48088 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153528 && echo "default-k8s-diff-port-153528" | sudo tee /etc/hostname
	I0229 18:58:08.782704   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153528
	
	I0229 18:58:08.782727   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.785588   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.785939   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.785967   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.786107   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.786290   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786430   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786550   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.786675   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.786910   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.786937   48088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153528/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:08.906593   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:08.906630   48088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:08.906671   48088 buildroot.go:174] setting up certificates
	I0229 18:58:08.906683   48088 provision.go:83] configureAuth start
	I0229 18:58:08.906700   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.906992   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.909897   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.910299   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910420   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.912899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913333   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.913363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913526   48088 provision.go:138] copyHostCerts
	I0229 18:58:08.913589   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:08.913602   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:08.913671   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:08.913796   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:08.913808   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:08.913838   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:08.913920   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:08.913940   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:08.913969   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:08.914052   48088 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153528 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube default-k8s-diff-port-153528]
	I0229 18:58:09.033009   48088 provision.go:172] copyRemoteCerts
	I0229 18:58:09.033064   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:09.033087   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.035647   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036023   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.036061   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036262   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.036434   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.036582   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.036685   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.127168   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:09.162113   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 18:58:09.191657   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:09.224555   48088 provision.go:86] duration metric: configureAuth took 317.8564ms
	I0229 18:58:09.224589   48088 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:09.224789   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:58:09.224877   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.227193   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227549   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.227577   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227731   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.227950   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228111   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.228398   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.228595   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.228617   48088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:09.760261   47515 start.go:369] acquired machines lock for "no-preload-247197" in 59.368392801s
	I0229 18:58:09.760316   47515 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:58:09.760326   47515 fix.go:54] fixHost starting: 
	I0229 18:58:09.760731   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:58:09.760768   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:58:09.777304   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0229 18:58:09.777781   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:58:09.778277   47515 main.go:141] libmachine: Using API Version  1
	I0229 18:58:09.778301   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:58:09.778655   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:58:09.778829   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:09.779012   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 18:58:09.780644   47515 fix.go:102] recreateIfNeeded on no-preload-247197: state=Stopped err=<nil>
	I0229 18:58:09.780670   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	W0229 18:58:09.780844   47515 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:58:09.782653   47515 out.go:177] * Restarting existing kvm2 VM for "no-preload-247197" ...
	I0229 18:58:05.149421   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.149514   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.164236   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.648767   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.648856   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.664890   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.148979   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.149069   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.165186   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.649135   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.649245   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.665357   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.148896   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.148978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.163358   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.649238   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.649309   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.665329   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.665359   47919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:07.665368   47919 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:07.665378   47919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:07.665433   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:07.713980   47919 cri.go:89] found id: ""
	I0229 18:58:07.714045   47919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:07.740723   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:07.753838   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:07.753914   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767175   47919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767197   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:07.902881   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.741237   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.970287   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.099101   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.214816   47919 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:09.214897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:09.715311   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:06.583750   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.083063   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.517694   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:09.517720   48088 machine.go:91] provisioned docker machine in 861.950931ms
	I0229 18:58:09.517732   48088 start.go:300] post-start starting for "default-k8s-diff-port-153528" (driver="kvm2")
	I0229 18:58:09.517742   48088 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:09.517755   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.518097   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:09.518134   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.520915   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.521285   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.521590   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.521761   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.521911   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.606485   48088 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:09.611376   48088 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:09.611404   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:09.611468   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:09.611564   48088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:09.611679   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:09.621573   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:09.648803   48088 start.go:303] post-start completed in 131.058856ms
	I0229 18:58:09.648825   48088 fix.go:56] fixHost completed within 20.839852585s
	I0229 18:58:09.648848   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.651416   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651743   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.651771   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651917   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.652114   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652392   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.652563   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.652715   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.652728   48088 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:09.760132   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233089.743154671
	
	I0229 18:58:09.760154   48088 fix.go:206] guest clock: 1709233089.743154671
	I0229 18:58:09.760160   48088 fix.go:219] Guest: 2024-02-29 18:58:09.743154671 +0000 UTC Remote: 2024-02-29 18:58:09.648829212 +0000 UTC m=+270.421886207 (delta=94.325459ms)
	I0229 18:58:09.760177   48088 fix.go:190] guest clock delta is within tolerance: 94.325459ms
	I0229 18:58:09.760183   48088 start.go:83] releasing machines lock for "default-k8s-diff-port-153528", held for 20.951247697s
	I0229 18:58:09.760211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.760473   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:09.763342   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763701   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.763746   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763896   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764519   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764720   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764801   48088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:09.764849   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.764951   48088 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:09.764981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.767670   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.767861   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768035   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768054   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768204   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768322   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768347   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768504   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.768518   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768673   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.768694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768890   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.769024   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.849055   48088 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:09.872309   48088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:10.015348   48088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:10.023333   48088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:10.023405   48088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:10.042264   48088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:10.042288   48088 start.go:475] detecting cgroup driver to use...
	I0229 18:58:10.042361   48088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:10.062390   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:10.080651   48088 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:10.080714   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:10.098478   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:10.115610   48088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:10.250212   48088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:10.402800   48088 docker.go:233] disabling docker service ...
	I0229 18:58:10.402862   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:10.419793   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:10.435149   48088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:10.589671   48088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:10.714460   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:10.730820   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:10.753910   48088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:10.753977   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.766151   48088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:10.766232   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.778824   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.792936   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.810158   48088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:10.828150   48088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:10.843416   48088 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:10.843488   48088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:10.866488   48088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:10.880628   48088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:11.031221   48088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:11.199068   48088 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:11.199143   48088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:11.204851   48088 start.go:543] Will wait 60s for crictl version
	I0229 18:58:11.204922   48088 ssh_runner.go:195] Run: which crictl
	I0229 18:58:11.209384   48088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:11.256928   48088 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:11.257014   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.293338   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.329107   48088 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:58:09.783970   47515 main.go:141] libmachine: (no-preload-247197) Calling .Start
	I0229 18:58:09.784127   47515 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:58:09.784926   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:58:09.785291   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:58:09.785654   47515 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:58:09.786319   47515 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:58:11.102135   47515 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:58:11.102911   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.103333   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.103414   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.103321   49001 retry.go:31] will retry after 205.990392ms: waiting for machine to come up
	I0229 18:58:11.310742   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.311298   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.311327   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.311247   49001 retry.go:31] will retry after 353.756736ms: waiting for machine to come up
	I0229 18:58:11.666882   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.667361   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.667392   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.667319   49001 retry.go:31] will retry after 308.284801ms: waiting for machine to come up
	I0229 18:58:11.976805   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.977355   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.977385   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.977309   49001 retry.go:31] will retry after 481.108836ms: waiting for machine to come up
	I0229 18:58:12.459764   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:12.460292   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:12.460330   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:12.460253   49001 retry.go:31] will retry after 549.22451ms: waiting for machine to come up
	I0229 18:58:11.330594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:11.333628   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334080   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:11.334112   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334361   48088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:11.339127   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:11.353078   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:58:11.353129   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:11.392503   48088 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:58:11.392573   48088 ssh_runner.go:195] Run: which lz4
	I0229 18:58:11.398589   48088 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:58:11.405052   48088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:58:11.405091   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:58:13.428402   48088 crio.go:444] Took 2.029836 seconds to copy over tarball
	I0229 18:58:13.428481   48088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:58:10.215640   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.715115   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.215866   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.715307   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.215171   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.715206   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.215153   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.715048   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.215148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.715628   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.084645   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.087354   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.011239   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.011724   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.011751   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.011676   49001 retry.go:31] will retry after 662.346902ms: waiting for machine to come up
	I0229 18:58:13.675622   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.676179   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.676208   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.676115   49001 retry.go:31] will retry after 761.484123ms: waiting for machine to come up
	I0229 18:58:14.439091   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:14.439599   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:14.439626   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:14.439546   49001 retry.go:31] will retry after 980.352556ms: waiting for machine to come up
	I0229 18:58:15.421962   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:15.422377   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:15.422405   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:15.422314   49001 retry.go:31] will retry after 1.134741057s: waiting for machine to come up
	I0229 18:58:16.558585   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:16.559071   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:16.559097   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:16.559005   49001 retry.go:31] will retry after 2.299052603s: waiting for machine to come up
	I0229 18:58:16.327243   48088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898733984s)
	I0229 18:58:16.327277   48088 crio.go:451] Took 2.898846 seconds to extract the tarball
	I0229 18:58:16.327289   48088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:58:16.374029   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:16.425625   48088 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:58:16.425654   48088 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:58:16.425740   48088 ssh_runner.go:195] Run: crio config
	I0229 18:58:16.477353   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:16.477382   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:16.477406   48088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:16.477447   48088 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153528 NodeName:default-k8s-diff-port-153528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:16.477595   48088 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-153528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:16.477659   48088 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-153528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 18:58:16.477718   48088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:58:16.489240   48088 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:16.489301   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:16.500764   48088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:58:16.522927   48088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:58:16.543902   48088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 18:58:16.565262   48088 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:16.571163   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:16.585476   48088 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528 for IP: 192.168.39.210
	I0229 18:58:16.585507   48088 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:16.585657   48088 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:16.585704   48088 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:16.585772   48088 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.key
	I0229 18:58:16.647093   48088 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key.6213553a
	I0229 18:58:16.647194   48088 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key
	I0229 18:58:16.647398   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:16.647463   48088 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:16.647476   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:16.647501   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:16.647527   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:16.647553   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:16.647591   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:16.648235   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:16.678452   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:58:16.708360   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:16.740905   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:16.768820   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:16.799459   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:16.829488   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:16.860881   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:16.893064   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:16.923404   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:16.952531   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:16.980895   48088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:17.001306   48088 ssh_runner.go:195] Run: openssl version
	I0229 18:58:17.007995   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:17.024000   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030471   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030544   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.038306   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:17.050985   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:17.063089   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068437   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068485   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.075156   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:17.087015   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:17.099964   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105272   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105333   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.112447   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:17.126499   48088 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:17.133216   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:17.140320   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:17.147900   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:17.154931   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:17.163552   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:17.172256   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:58:17.181349   48088 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:17.181481   48088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:17.181554   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:17.227444   48088 cri.go:89] found id: ""
	I0229 18:58:17.227532   48088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:17.242533   48088 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:17.242562   48088 kubeadm.go:636] restartCluster start
	I0229 18:58:17.242616   48088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:17.254713   48088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.256305   48088 kubeconfig.go:92] found "default-k8s-diff-port-153528" server: "https://192.168.39.210:8444"
	I0229 18:58:17.259432   48088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:17.281454   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.281525   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.295342   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.781719   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.781807   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.797462   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.281981   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.282082   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.300449   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.781952   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.782024   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.796641   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:15.215935   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.714969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.215921   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.715200   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.215151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.715520   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.215291   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.215157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.715037   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.585143   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.086077   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.861140   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:18.861635   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:18.861658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:18.861584   49001 retry.go:31] will retry after 2.115098542s: waiting for machine to come up
	I0229 18:58:20.978165   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:20.978625   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:20.978658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:20.978570   49001 retry.go:31] will retry after 3.520116791s: waiting for machine to come up
	I0229 18:58:19.282008   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.282093   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.297806   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:19.782384   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.782465   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.802496   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.281712   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.281777   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.298545   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.782139   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.782249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.799615   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.282180   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.282288   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.297649   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.782263   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.782341   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.797537   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.282131   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.282211   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.303084   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.781558   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.781645   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.797155   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.281645   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.281727   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.296059   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.781663   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.797132   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.215501   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.214953   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.715762   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.215608   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.715556   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.215633   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.715012   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.215182   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.585475   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:22.586962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:25.082804   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:24.503134   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:24.503537   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:24.503561   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:24.503495   49001 retry.go:31] will retry after 3.056941725s: waiting for machine to come up
	I0229 18:58:27.562228   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:27.562698   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:27.562729   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:27.562650   49001 retry.go:31] will retry after 5.535128197s: waiting for machine to come up
	I0229 18:58:24.282207   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.282273   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.298683   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:24.781997   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.782088   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.796544   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.282137   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.282249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.297916   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.782489   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.782605   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.800171   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.281679   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.281764   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.296395   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.781700   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.796380   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.282230   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:27.282319   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:27.300719   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.300745   48088 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:27.300753   48088 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:27.300762   48088 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:27.300822   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:27.344465   48088 cri.go:89] found id: ""
	I0229 18:58:27.344525   48088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:27.367244   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:27.379831   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:27.379895   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390372   48088 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390393   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:27.521441   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.070547   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.324425   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.416807   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.485785   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:28.485880   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.986473   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.215272   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.715667   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.215566   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.715860   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.214993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.715679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.215093   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.715081   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.215188   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.715981   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.585150   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.585716   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.486136   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.512004   48088 api_server.go:72] duration metric: took 1.026225672s to wait for apiserver process to appear ...
	I0229 18:58:29.512036   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:58:29.512081   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:29.512602   48088 api_server.go:269] stopped: https://192.168.39.210:8444/healthz: Get "https://192.168.39.210:8444/healthz": dial tcp 192.168.39.210:8444: connect: connection refused
	I0229 18:58:30.012197   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.076090   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:58:33.076122   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:58:33.076141   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.115044   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.115082   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:33.512305   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.518615   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.518640   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.012514   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.024771   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:34.024809   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.512427   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.519703   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 18:58:34.527814   48088 api_server.go:141] control plane version: v1.28.4
	I0229 18:58:34.527850   48088 api_server.go:131] duration metric: took 5.015799681s to wait for apiserver health ...
	I0229 18:58:34.527862   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:34.527869   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:34.529573   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:58:30.215544   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.715080   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.215386   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.215078   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.715087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.215842   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.714950   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.215778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.715201   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.084243   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:34.087247   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:33.099983   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100527   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has current primary IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100548   47515 main.go:141] libmachine: (no-preload-247197) Found IP for machine: 192.168.50.72
	I0229 18:58:33.100584   47515 main.go:141] libmachine: (no-preload-247197) Reserving static IP address...
	I0229 18:58:33.100959   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.100985   47515 main.go:141] libmachine: (no-preload-247197) DBG | skip adding static IP to network mk-no-preload-247197 - found existing host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"}
	I0229 18:58:33.100999   47515 main.go:141] libmachine: (no-preload-247197) Reserved static IP address: 192.168.50.72
	I0229 18:58:33.101016   47515 main.go:141] libmachine: (no-preload-247197) Waiting for SSH to be available...
	I0229 18:58:33.101057   47515 main.go:141] libmachine: (no-preload-247197) DBG | Getting to WaitForSSH function...
	I0229 18:58:33.103439   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.103766   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.103817   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.104041   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH client type: external
	I0229 18:58:33.104069   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa (-rw-------)
	I0229 18:58:33.104110   47515 main.go:141] libmachine: (no-preload-247197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:33.104131   47515 main.go:141] libmachine: (no-preload-247197) DBG | About to run SSH command:
	I0229 18:58:33.104145   47515 main.go:141] libmachine: (no-preload-247197) DBG | exit 0
	I0229 18:58:33.240401   47515 main.go:141] libmachine: (no-preload-247197) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:33.240811   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:58:33.241500   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.244578   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.244970   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.245002   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.245358   47515 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:58:33.245522   47515 machine.go:88] provisioning docker machine ...
	I0229 18:58:33.245542   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:33.245755   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.245935   47515 buildroot.go:166] provisioning hostname "no-preload-247197"
	I0229 18:58:33.245977   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.246175   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.248841   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249263   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.249284   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249458   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.249629   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249767   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249946   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.250120   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.250335   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.250351   47515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247197 && echo "no-preload-247197" | sudo tee /etc/hostname
	I0229 18:58:33.386175   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247197
	
	I0229 18:58:33.386210   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.389491   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.389909   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.389950   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.390080   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.390288   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390495   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390678   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.390844   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.391058   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.391090   47515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247197/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:33.510209   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:33.510243   47515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:33.510263   47515 buildroot.go:174] setting up certificates
	I0229 18:58:33.510273   47515 provision.go:83] configureAuth start
	I0229 18:58:33.510281   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.510582   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.513357   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513741   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.513769   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.516227   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516513   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.516543   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516700   47515 provision.go:138] copyHostCerts
	I0229 18:58:33.516746   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:33.516761   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:33.516824   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:33.516931   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:33.516952   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:33.516987   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:33.517066   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:33.517077   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:33.517106   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:33.517181   47515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.no-preload-247197 san=[192.168.50.72 192.168.50.72 localhost 127.0.0.1 minikube no-preload-247197]
	I0229 18:58:33.651858   47515 provision.go:172] copyRemoteCerts
	I0229 18:58:33.651914   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:33.651936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.655072   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655551   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.655584   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655776   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.655952   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.656103   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.656277   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:33.747197   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:58:33.776690   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:33.804404   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:33.831068   47515 provision.go:86] duration metric: configureAuth took 320.782451ms
	I0229 18:58:33.831105   47515 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:33.831336   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:33.831469   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.834209   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834617   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.834650   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834845   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.835046   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835215   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835343   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.835503   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.835694   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.835717   47515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:34.141350   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:34.141372   47515 machine.go:91] provisioned docker machine in 895.837431ms
	I0229 18:58:34.141385   47515 start.go:300] post-start starting for "no-preload-247197" (driver="kvm2")
	I0229 18:58:34.141399   47515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:34.141422   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.141763   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:34.141800   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.144673   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145078   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.145106   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145225   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.145387   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.145509   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.145618   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.241817   47515 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:34.247096   47515 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:34.247125   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:34.247200   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:34.247294   47515 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:34.247386   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:34.261959   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:34.293974   47515 start.go:303] post-start completed in 152.574202ms
	I0229 18:58:34.294000   47515 fix.go:56] fixHost completed within 24.533673806s
	I0229 18:58:34.294031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.297066   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297455   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.297480   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297685   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.297865   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298064   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298256   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.298448   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:34.298671   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:34.298687   47515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:34.416701   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233114.391433365
	
	I0229 18:58:34.416724   47515 fix.go:206] guest clock: 1709233114.391433365
	I0229 18:58:34.416733   47515 fix.go:219] Guest: 2024-02-29 18:58:34.391433365 +0000 UTC Remote: 2024-02-29 18:58:34.294005249 +0000 UTC m=+366.458860154 (delta=97.428116ms)
	I0229 18:58:34.416763   47515 fix.go:190] guest clock delta is within tolerance: 97.428116ms
	I0229 18:58:34.416770   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 24.656479144s
	I0229 18:58:34.416795   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.417031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:34.419713   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420098   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.420129   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420288   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420789   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420989   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.421076   47515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:34.421125   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.421239   47515 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:34.421268   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.424047   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424359   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424399   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424418   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424564   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.424731   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.424803   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424829   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424969   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425124   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.425217   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.425348   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.425506   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425705   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.505253   47515 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:34.533780   47515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:34.696609   47515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:34.703768   47515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:34.703848   47515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:34.723243   47515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:34.723271   47515 start.go:475] detecting cgroup driver to use...
	I0229 18:58:34.723342   47515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:34.743696   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:34.760022   47515 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:34.760085   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:34.775217   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:34.791576   47515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:34.920544   47515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:35.093684   47515 docker.go:233] disabling docker service ...
	I0229 18:58:35.093760   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:35.112349   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:35.128145   47515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:35.246120   47515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:35.363110   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:35.378087   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:35.399610   47515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:35.399658   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.410579   47515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:35.410624   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.421664   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.432726   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.443728   47515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:35.455072   47515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:35.467211   47515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:35.467263   47515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:35.480669   47515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:35.491649   47515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:35.621272   47515 ssh_runner.go:195] Run: sudo systemctl restart crio
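The block above reconfigures CRI-O before restarting it: write /etc/crictl.yaml, pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup to "pod". The following is a minimal Go sketch of those same edits, not minikube's own code; it assumes root access on a host where /etc/crio/crio.conf.d/02-crio.conf exists, and it collapses the three separate sed invocations from the log into in-process string replacements.

// crio_config_sketch.go - reproduce the CRI-O config edits shown in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log

func main() {
	data, err := os.ReadFile(confPath)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// Point CRI-O at the pause image kubeadm expects (first sed in the log).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any stale conmon_cgroup line, then set cgroupfs and re-add conmon_cgroup.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(confPath, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// Restart the runtime so the new settings take effect, as the log does next.
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "crio"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}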
	I0229 18:58:35.793148   47515 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:35.793225   47515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:35.798495   47515 start.go:543] Will wait 60s for crictl version
	I0229 18:58:35.798556   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:35.803756   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:35.848168   47515 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:35.848259   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.879346   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.911939   47515 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 18:58:35.913174   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:35.915761   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916134   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:35.916162   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916350   47515 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:35.921206   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
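The one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal entry, then append a fresh one pointing at the network gateway. A small Go sketch of that pattern, under the assumption of root privileges on the guest; the IP is the one visible in this log, not a general default.

// hosts_entry_sketch.go - idempotent /etc/hosts update, mirroring the log's grep/echo pipeline.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any previous entry for the same hostname before re-adding it.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}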
	I0229 18:58:35.936342   47515 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:58:35.936375   47515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:35.974456   47515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 18:58:35.974475   47515 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:58:35.974509   47515 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.974546   47515 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.974567   47515 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:35.974613   47515 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.974668   47515 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.974733   47515 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.974780   47515 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975073   47515 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975958   47515 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.975981   47515 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975993   47515 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.976002   47515 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.976027   47515 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975963   47515 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.975959   47515 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.976249   47515 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.111205   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 18:58:36.124071   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.150002   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.196158   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.258361   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.273898   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.283390   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.336487   47515 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 18:58:36.336531   47515 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.336541   47515 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 18:58:36.336577   47515 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.336590   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336620   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336636   47515 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 18:58:36.336661   47515 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.336670   47515 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 18:58:36.336695   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336697   47515 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.336723   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.383302   47515 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 18:58:36.383347   47515 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.383402   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393420   47515 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 18:58:36.393444   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.393459   47515 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.393495   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393527   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.393579   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.393612   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.393665   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.503611   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.503707   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.508306   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.508403   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.511536   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:58:36.511600   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511636   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:36.511706   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.511721   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.511749   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511781   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.522392   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 18:58:36.522413   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522458   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522645   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 18:58:36.523319   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 18:58:36.529871   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576922   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576994   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:58:36.577093   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:36.892014   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:34.530886   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:58:34.547233   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:58:34.572237   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:58:34.586775   48088 system_pods.go:59] 8 kube-system pods found
	I0229 18:58:34.586816   48088 system_pods.go:61] "coredns-5dd5756b68-tr4nn" [016aff45-17c3-4278-a7f3-1e0a5770f1d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:58:34.586827   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [829f38ad-e4e4-434d-8da6-dde64deeb1ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:58:34.586837   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [e27986e6-58a2-4acc-8a41-d4662ce0848d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:58:34.586853   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [fb77dff9-141e-495f-9be8-f570f9387bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:58:34.586868   48088 system_pods.go:61] "kube-proxy-fwqsv" [af8cd0ff-71dd-44d4-8918-496e27654cbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:58:34.586887   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [a325ec8e-4613-4447-87b1-c23b5b614352] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:58:34.586898   48088 system_pods.go:61] "metrics-server-57f55c9bc5-226bj" [80d7a4c6-e9b5-4324-8c61-489a874a9e79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:58:34.586910   48088 system_pods.go:61] "storage-provisioner" [4270d9ce-329f-4531-9563-65a398f8b168] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:58:34.586919   48088 system_pods.go:74] duration metric: took 14.657543ms to wait for pod list to return data ...
	I0229 18:58:34.586932   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:58:34.595109   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:58:34.595144   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 18:58:34.595158   48088 node_conditions.go:105] duration metric: took 8.219984ms to run NodePressure ...
	I0229 18:58:34.595179   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:34.946493   48088 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951066   48088 kubeadm.go:787] kubelet initialised
	I0229 18:58:34.951088   48088 kubeadm.go:788] duration metric: took 4.569338ms waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951098   48088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:58:34.956637   48088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:36.967075   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:35.215815   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.715203   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.215521   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.715525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.215610   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.715474   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.215208   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.714993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.215128   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.584041   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.584897   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.722817   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.20033311s)
	I0229 18:58:38.722904   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 18:58:38.722923   47515 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.830873001s)
	I0229 18:58:38.722981   47515 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 18:58:38.723016   47515 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:38.722938   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.723083   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:38.723104   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.722872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.145756086s)
	I0229 18:58:38.723163   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 18:58:38.728297   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:42.131683   47515 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.403360461s)
	I0229 18:58:42.131729   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 18:58:42.131819   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.408694108s)
	I0229 18:58:42.131839   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 18:58:42.131823   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:42.131862   47515 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:42.131903   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:39.465588   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:41.473698   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:42.965252   48088 pod_ready.go:92] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.965281   48088 pod_ready.go:81] duration metric: took 8.008622438s waiting for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.965293   48088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977865   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.977888   48088 pod_ready.go:81] duration metric: took 12.586144ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977900   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486518   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:43.486545   48088 pod_ready.go:81] duration metric: took 508.631346ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486554   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:40.215679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.715898   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.215271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.715702   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.214943   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.715085   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.215196   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.715164   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.215580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.715155   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.082209   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.089104   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:45.101973   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.991872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.859995098s)
	I0229 18:58:43.991921   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 18:58:43.992104   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.860178579s)
	I0229 18:58:43.992159   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 18:58:43.992190   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:43.992238   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:45.454368   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.462102352s)
	I0229 18:58:45.454407   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 18:58:45.454436   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.454567   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.493014   48088 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.493937   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.493969   48088 pod_ready.go:81] duration metric: took 3.007406763s waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.493982   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499157   48088 pod_ready.go:92] pod "kube-proxy-fwqsv" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.499177   48088 pod_ready.go:81] duration metric: took 5.187224ms waiting for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499188   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006573   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:48.006600   48088 pod_ready.go:81] duration metric: took 1.507402889s waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006612   48088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
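The pod_ready lines above poll each system-critical pod until its Ready condition turns True (or the 4m0s budget runs out). The sketch below is a hypothetical stand-alone version of that wait using client-go, not the pod_ready helper itself; the kubeconfig path and pod name are placeholders, and the 2-second poll interval only approximates the cadence visible in the timestamps.

// pod_ready_sketch.go - poll one kube-system pod until it reports Ready or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const podName = "coredns-placeholder" // hypothetical name, not from this run
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}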
	I0229 18:58:45.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.715879   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.215457   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.715056   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.215140   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.715448   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.715058   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.586794   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.084118   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:48.118942   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.664337971s)
	I0229 18:58:48.118983   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 18:58:48.119010   47515 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:48.119086   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:52.117429   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.998319742s)
	I0229 18:58:52.117462   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 18:58:52.117488   47515 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:52.117538   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:50.015404   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:52.515203   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.214969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.715535   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.715704   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.715897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.215106   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.715753   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.215737   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.715449   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.084871   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:54.582435   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.079184   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 18:58:53.079224   47515 cache_images.go:123] Successfully loaded all cached images
	I0229 18:58:53.079231   47515 cache_images.go:92] LoadImages completed in 17.104746432s
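LoadImages above works tarball by tarball: each cached image is copied to /var/lib/minikube/images on the node (skipped when it already exists) and then streamed into the CRI-O image store with podman load. A minimal sketch of that final load step, assuming podman is installed on the node and the tarballs are already in place; it is not the cache_images implementation itself.

// image_load_sketch.go - load every cached image tarball into the container runtime via podman.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	tarballs, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		panic(err)
	}
	for _, t := range tarballs {
		fmt.Println("loading", t)
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to load %s: %v\n%s\n", t, err, out)
			continue
		}
		fmt.Printf("%s", out)
	}
}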
	I0229 18:58:53.079303   47515 ssh_runner.go:195] Run: crio config
	I0229 18:58:53.126378   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:58:53.126400   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:53.126417   47515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:53.126434   47515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247197 NodeName:no-preload-247197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:53.126583   47515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247197"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:53.126643   47515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:58:53.126692   47515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:58:53.141044   47515 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:53.141117   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:53.153167   47515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 18:58:53.173724   47515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:58:53.192645   47515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0229 18:58:53.212004   47515 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:53.216443   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:53.233319   47515 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197 for IP: 192.168.50.72
	I0229 18:58:53.233353   47515 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:53.233510   47515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:53.233568   47515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:53.233680   47515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.key
	I0229 18:58:53.233763   47515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key.7c8fc674
	I0229 18:58:53.233803   47515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key
	I0229 18:58:53.233915   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:53.233942   47515 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:53.233948   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:53.233971   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:53.233991   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:53.234011   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:53.234050   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:53.234710   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:53.264093   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:58:53.290793   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:53.319206   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:53.346074   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:53.373754   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:53.402222   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:53.430685   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:53.458589   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:53.485553   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:53.513594   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:53.542588   47515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:53.562935   47515 ssh_runner.go:195] Run: openssl version
	I0229 18:58:53.571313   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:53.586708   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592585   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592682   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.600135   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:53.614410   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:53.627733   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632869   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632926   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.639973   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:53.654090   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:53.667714   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.672987   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.673046   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.679806   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:53.692846   47515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:53.697764   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:53.704678   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:53.711070   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:53.717607   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:53.724048   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:53.731138   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
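Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether a control-plane certificate stays valid for at least the next 24 hours. The Go sketch below performs the same check with crypto/x509 instead of shelling out; the certificate paths mirror the ones probed in the log and are assumed to exist on the node.

// cert_checkend_sketch.go - flag certificates that expire within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		switch {
		case err != nil:
			fmt.Printf("%s: %v\n", c, err)
		case soon:
			fmt.Printf("%s: expires within 24h, needs renewal\n", c)
		default:
			fmt.Printf("%s: ok\n", c)
		}
	}
}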
	I0229 18:58:53.737875   47515 kubeadm.go:404] StartCluster: {Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:53.737981   47515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:53.738028   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:53.777952   47515 cri.go:89] found id: ""
	I0229 18:58:53.778016   47515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:53.790323   47515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:53.790342   47515 kubeadm.go:636] restartCluster start
	I0229 18:58:53.790397   47515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:53.801812   47515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:53.803203   47515 kubeconfig.go:92] found "no-preload-247197" server: "https://192.168.50.72:8443"
	I0229 18:58:53.806252   47515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:53.817542   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:53.817601   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:53.831702   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.318196   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.318261   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.332586   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.818521   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.818617   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.835279   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.317681   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.317760   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.334156   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.818654   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.818761   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.834435   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.317800   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.317923   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.333149   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.817667   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.817776   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.832497   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.318058   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.318173   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.332672   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.818372   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.818477   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.834669   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
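The repeated "Checking apiserver status" entries above are a simple poll: roughly every 500ms, run pgrep against the kube-apiserver process and treat a non-zero exit as "not up yet". A minimal sketch of that loop, assuming a Linux host with pgrep available and a one-minute budget chosen here for illustration rather than taken from the log.

// apiserver_poll_sketch.go - poll for a running kube-apiserver process, mirroring the loop in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.Now().Add(1 * time.Minute)
	for range ticker.C {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for kube-apiserver")
			return
		}
	}
}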
	I0229 18:58:55.015453   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:57.513580   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:55.215634   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.715221   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.215582   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.715580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.215652   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.715281   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.215656   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.715759   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.714984   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.583205   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:59.083761   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:58.318525   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.318595   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.334704   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:58.818249   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.818360   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.834221   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.318385   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.318489   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.334283   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.818167   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.818231   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.834310   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.317793   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.317904   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.334063   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.817623   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.817702   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.832855   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.318481   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.318569   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.333716   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.818312   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.818413   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.834094   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.317571   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.317680   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.332422   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.817947   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.818044   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.834339   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.514446   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:02.015881   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:00.215747   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.214978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.715726   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.215092   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.715148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.215149   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.715717   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.215830   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.715275   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.084277   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.583278   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.318317   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.318410   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.334824   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.818569   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.818652   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.834206   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.834235   47515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:59:03.834244   47515 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:59:03.834255   47515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:59:03.834306   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:59:03.877464   47515 cri.go:89] found id: ""
	I0229 18:59:03.877543   47515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:59:03.901093   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:59:03.912185   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:59:03.912237   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923685   47515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923706   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:04.037753   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.127681   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089896164s)
	I0229 18:59:05.127710   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.363326   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.447053   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
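(The preceding lines show the cluster reconfigure path: the stale-config check at 18:59:03 finds no kubeconfig files, so individual kubeadm init phases are re-run against the regenerated config. Below is a minimal sketch of that sequence using os/exec; it is an illustration, not minikube's kubeadm.go. The kubeadm binary path and config path are taken from the log lines above, everything else is an assumption.)

// Sketch: re-run the kubeadm init phases logged above against the
// regenerated /var/tmp/minikube/kubeadm.yaml.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm" // from the logged PATH; assumed layout
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control plane reconfigured; waiting for apiserver")
}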
	I0229 18:59:05.525183   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:05.525276   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.026071   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.525747   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.026103   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.043681   47515 api_server.go:72] duration metric: took 1.518498943s to wait for apiserver process to appear ...
	I0229 18:59:07.043706   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:07.043728   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:04.518296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.014672   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:05.215563   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.215014   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.715750   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.215911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.215895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.715565   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.214999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:09.215096   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:09.270645   47919 cri.go:89] found id: ""
	I0229 18:59:09.270672   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.270683   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:09.270690   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:09.270748   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:09.335492   47919 cri.go:89] found id: ""
	I0229 18:59:09.335519   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.335530   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:09.335546   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:09.335627   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:09.405117   47919 cri.go:89] found id: ""
	I0229 18:59:09.405150   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.405160   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:09.405167   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:09.405233   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:09.451096   47919 cri.go:89] found id: ""
	I0229 18:59:09.451128   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.451140   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:09.451147   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:09.451226   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:09.498951   47919 cri.go:89] found id: ""
	I0229 18:59:09.498981   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.499007   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:09.499014   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:09.499091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:09.544447   47919 cri.go:89] found id: ""
	I0229 18:59:09.544474   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.544484   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:09.544491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:09.544548   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:09.597735   47919 cri.go:89] found id: ""
	I0229 18:59:09.597764   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.597775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:09.597782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:09.597836   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:09.648458   47919 cri.go:89] found id: ""
	I0229 18:59:09.648480   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.648489   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:09.648499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:09.648515   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:09.700744   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:09.700792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:09.717303   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:09.717332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:09.845966   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:09.845984   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:09.845995   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:09.913069   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:09.913106   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
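(Process 47919 above is taking inventory of control-plane containers with `crictl ps -a --quiet --name=<component>` and, finding none, falls back to gathering kubelet/dmesg/CRI-O logs. The sketch below illustrates just the inventory pass; it is not minikube's cri.go and assumes crictl is available via sudo on the node.)

// Sketch: check each control-plane component for any container (running or
// exited), mirroring the `found id: ""` / "0 containers" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("%s: no containers found\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}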
	I0229 18:59:05.583650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.584155   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.584605   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.527960   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.528037   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.528059   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.571679   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.571713   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.571738   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.647733   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:09.647780   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.044646   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.049310   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.049347   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.543904   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.551014   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.551055   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:11.044658   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:11.051170   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 18:59:11.059048   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:11.059076   47515 api_server.go:131] duration metric: took 4.015363545s to wait for apiserver health ...
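(The healthz sequence above goes 403 (anonymous user rejected) -> 500 (post-start hooks such as rbac/bootstrap-roles still failing) -> 200 once bootstrap completes. A minimal sketch of that polling loop is shown below; it is not minikube's api_server.go. The endpoint URL comes from the log, while the timeout, poll interval, and skipping of TLS verification are assumptions made to keep the sketch self-contained.)

// Sketch: poll the apiserver /healthz endpoint until it returns 200,
// retrying through the 403 and 500 responses seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: verification is skipped here so the anonymous probe works
		// against the apiserver's cluster certificate without extra setup.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 and 500 are treated as "not ready yet" and retried.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.72:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}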
	I0229 18:59:11.059085   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:59:11.059092   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:59:11.060915   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:59:11.062158   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:59:11.076961   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:59:11.109344   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:11.129625   47515 system_pods.go:59] 8 kube-system pods found
	I0229 18:59:11.129659   47515 system_pods.go:61] "coredns-76f75df574-dfrds" [ab7ce7e3-0532-48a1-9177-00e554d7e5af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:59:11.129668   47515 system_pods.go:61] "etcd-no-preload-247197" [e37e6d4c-5039-484e-98af-553ade3ba60f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:11.129679   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [933648a9-115f-4c2a-b699-48ef7409331c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:11.129692   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [b87a4a06-8a47-4cdf-a5e7-85f967e6332a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:11.129699   47515 system_pods.go:61] "kube-proxy-hjm9j" [a2e6ec66-78d9-4637-bb47-3f954969813b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:59:11.129707   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [cc52dc2c-cbe0-4cf0-8a2d-99a6f1880f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:11.129717   47515 system_pods.go:61] "metrics-server-57f55c9bc5-ggf8f" [dd2986a2-20a9-499c-805a-3c28819ff2f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:59:11.129726   47515 system_pods.go:61] "storage-provisioner" [22f64d5e-b947-43ed-9842-cb6e252fd4a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:59:11.129733   47515 system_pods.go:74] duration metric: took 20.366108ms to wait for pod list to return data ...
	I0229 18:59:11.129742   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:11.133259   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:59:11.133282   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 18:59:11.133294   47515 node_conditions.go:105] duration metric: took 3.545943ms to run NodePressure ...
	I0229 18:59:11.133313   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:11.618536   47515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625628   47515 kubeadm.go:787] kubelet initialised
	I0229 18:59:11.625649   47515 kubeadm.go:788] duration metric: took 7.089584ms waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625661   47515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:59:11.641122   47515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
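(The pod_ready.go lines that dominate this log are a readiness poll: each "Ready":"False" entry is one probe of a pod's PodReady condition, repeated until it flips to True or the 4m0s budget runs out. Below is a minimal client-go sketch of that loop, not minikube's actual pod_ready.go. The namespace, pod name, and kubeconfig path are copied from the log; the 2-second poll interval is an assumption inferred from the probe spacing.)

// Sketch: wait up to 4 minutes for a pod's PodReady condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-dfrds", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval between probes
	}
	fmt.Println("timed out waiting for pod to be Ready")
}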
	I0229 18:59:09.515059   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:11.515286   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.013214   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:12.465591   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:12.479774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:12.479825   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:12.517591   47919 cri.go:89] found id: ""
	I0229 18:59:12.517620   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.517630   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:12.517637   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:12.517693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:12.560735   47919 cri.go:89] found id: ""
	I0229 18:59:12.560758   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.560769   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:12.560776   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:12.560843   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:12.600002   47919 cri.go:89] found id: ""
	I0229 18:59:12.600025   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.600033   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:12.600043   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:12.600088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:12.639223   47919 cri.go:89] found id: ""
	I0229 18:59:12.639252   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.639264   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:12.639272   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:12.639339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:12.682491   47919 cri.go:89] found id: ""
	I0229 18:59:12.682514   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.682524   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:12.682531   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:12.682590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:12.720669   47919 cri.go:89] found id: ""
	I0229 18:59:12.720693   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.720700   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:12.720706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:12.720773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:12.764880   47919 cri.go:89] found id: ""
	I0229 18:59:12.764908   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.764919   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:12.764926   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:12.765011   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:12.808987   47919 cri.go:89] found id: ""
	I0229 18:59:12.809019   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.809052   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:12.809064   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:12.809079   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:12.866228   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:12.866263   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:12.886698   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:12.886729   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:12.963092   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:12.963116   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:12.963129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:13.034485   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:13.034524   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:11.586793   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.081742   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:13.648688   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.648876   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:17.649478   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:16.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.015919   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.588224   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.603293   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:15.603368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:15.648746   47919 cri.go:89] found id: ""
	I0229 18:59:15.648771   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.648781   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:15.648788   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:15.648850   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:15.686420   47919 cri.go:89] found id: ""
	I0229 18:59:15.686447   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.686463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:15.686470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:15.686533   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:15.729410   47919 cri.go:89] found id: ""
	I0229 18:59:15.729439   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.729451   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:15.729458   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:15.729526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:15.768078   47919 cri.go:89] found id: ""
	I0229 18:59:15.768108   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.768119   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:15.768127   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:15.768188   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:15.806725   47919 cri.go:89] found id: ""
	I0229 18:59:15.806753   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.806765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:15.806772   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:15.806845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:15.848566   47919 cri.go:89] found id: ""
	I0229 18:59:15.848593   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.848600   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:15.848606   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:15.848657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:15.888907   47919 cri.go:89] found id: ""
	I0229 18:59:15.888930   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.888942   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:15.888948   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:15.889009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:15.926653   47919 cri.go:89] found id: ""
	I0229 18:59:15.926686   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.926708   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:15.926729   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:15.926746   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:15.976773   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:15.976812   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:15.995440   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:15.995477   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:16.103753   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:16.103774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:16.103786   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:16.188282   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:16.188319   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:18.733451   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:18.748528   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:18.748607   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:18.785998   47919 cri.go:89] found id: ""
	I0229 18:59:18.786055   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.786069   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:18.786078   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:18.786144   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:18.824234   47919 cri.go:89] found id: ""
	I0229 18:59:18.824260   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.824270   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:18.824277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:18.824339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:18.868586   47919 cri.go:89] found id: ""
	I0229 18:59:18.868615   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.868626   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:18.868633   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:18.868696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:18.912622   47919 cri.go:89] found id: ""
	I0229 18:59:18.912647   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.912655   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:18.912661   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:18.912708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:18.952001   47919 cri.go:89] found id: ""
	I0229 18:59:18.952029   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.952040   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:18.952047   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:18.952108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:18.993085   47919 cri.go:89] found id: ""
	I0229 18:59:18.993130   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.993140   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:18.993148   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:18.993209   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:19.041498   47919 cri.go:89] found id: ""
	I0229 18:59:19.041524   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.041536   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:19.041543   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:19.041601   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:19.099107   47919 cri.go:89] found id: ""
	I0229 18:59:19.099132   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.099143   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:19.099153   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:19.099169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:19.158578   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:19.158615   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:19.173561   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:19.173590   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:19.248498   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:19.248524   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:19.248540   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:19.326904   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:19.326939   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:16.085349   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.582796   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:20.148468   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.648188   47515 pod_ready.go:92] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:21.648214   47515 pod_ready.go:81] duration metric: took 10.0070638s waiting for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:21.648225   47515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:20.514234   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:22.514669   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.877087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:21.892919   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:21.892976   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:21.931119   47919 cri.go:89] found id: ""
	I0229 18:59:21.931147   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.931159   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:21.931167   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:21.931227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:21.971884   47919 cri.go:89] found id: ""
	I0229 18:59:21.971908   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.971916   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:21.971921   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:21.971975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:22.019170   47919 cri.go:89] found id: ""
	I0229 18:59:22.019206   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.019216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:22.019232   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:22.019311   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:22.078057   47919 cri.go:89] found id: ""
	I0229 18:59:22.078083   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.078093   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:22.078100   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:22.078162   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:22.128112   47919 cri.go:89] found id: ""
	I0229 18:59:22.128141   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.128151   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:22.128157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:22.128214   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:22.171354   47919 cri.go:89] found id: ""
	I0229 18:59:22.171382   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.171393   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:22.171400   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:22.171466   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:22.225620   47919 cri.go:89] found id: ""
	I0229 18:59:22.225642   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.225651   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:22.225658   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:22.225718   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:22.271291   47919 cri.go:89] found id: ""
	I0229 18:59:22.271320   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.271332   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:22.271343   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:22.271358   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:22.336735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:22.336765   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:22.354397   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:22.354425   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:22.432691   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:22.432713   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:22.432727   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:22.520239   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:22.520268   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:20.587039   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.084979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.086225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.657675   47515 pod_ready.go:102] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.656013   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.656050   47515 pod_ready.go:81] duration metric: took 4.007810687s waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.656064   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661235   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.661263   47515 pod_ready.go:81] duration metric: took 5.191999ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661273   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666649   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.666672   47515 pod_ready.go:81] duration metric: took 5.388774ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666680   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672042   47515 pod_ready.go:92] pod "kube-proxy-hjm9j" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.672067   47515 pod_ready.go:81] duration metric: took 5.380771ms waiting for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672076   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.676980   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.677001   47515 pod_ready.go:81] duration metric: took 4.919332ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.677013   47515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:27.684865   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.017772   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:27.513975   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.073478   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:25.105197   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:25.105262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:25.165700   47919 cri.go:89] found id: ""
	I0229 18:59:25.165728   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.165737   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:25.165744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:25.165810   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:25.210864   47919 cri.go:89] found id: ""
	I0229 18:59:25.210892   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.210904   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:25.210911   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:25.210974   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:25.257785   47919 cri.go:89] found id: ""
	I0229 18:59:25.257810   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.257820   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:25.257827   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:25.257888   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:25.299816   47919 cri.go:89] found id: ""
	I0229 18:59:25.299844   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.299855   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:25.299863   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:25.299933   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:25.339711   47919 cri.go:89] found id: ""
	I0229 18:59:25.339737   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.339746   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:25.339751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:25.339805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:25.381107   47919 cri.go:89] found id: ""
	I0229 18:59:25.381135   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.381145   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:25.381153   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:25.381211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:25.429029   47919 cri.go:89] found id: ""
	I0229 18:59:25.429054   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.429064   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:25.429071   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:25.429130   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:25.470598   47919 cri.go:89] found id: ""
	I0229 18:59:25.470629   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.470637   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:25.470644   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:25.470655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.516439   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:25.516476   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:25.569170   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:25.569204   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:25.584405   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:25.584430   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:25.663650   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:25.663671   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:25.663686   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.248036   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:28.263367   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:28.263440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:28.302232   47919 cri.go:89] found id: ""
	I0229 18:59:28.302259   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.302273   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:28.302281   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:28.302340   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:28.345147   47919 cri.go:89] found id: ""
	I0229 18:59:28.345173   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.345185   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:28.345192   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:28.345250   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:28.383671   47919 cri.go:89] found id: ""
	I0229 18:59:28.383690   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.383702   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:28.383709   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:28.383762   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:28.423737   47919 cri.go:89] found id: ""
	I0229 18:59:28.423762   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.423769   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:28.423774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:28.423826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:28.465679   47919 cri.go:89] found id: ""
	I0229 18:59:28.465705   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.465715   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:28.465723   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:28.465775   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:28.509703   47919 cri.go:89] found id: ""
	I0229 18:59:28.509731   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.509742   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:28.509754   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:28.509826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:28.549981   47919 cri.go:89] found id: ""
	I0229 18:59:28.550010   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.550021   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:28.550027   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:28.550093   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:28.589802   47919 cri.go:89] found id: ""
	I0229 18:59:28.589827   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.589834   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:28.589841   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:28.589853   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:28.670623   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:28.670644   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:28.670655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.765451   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:28.765484   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:28.821538   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:28.821571   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:28.889401   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:28.889438   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:27.583470   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.584344   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:30.184242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:32.184867   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.514804   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.516473   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.013518   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.406911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:31.422464   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:31.422541   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:31.460701   47919 cri.go:89] found id: ""
	I0229 18:59:31.460744   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.460755   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:31.460762   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:31.460822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:31.506966   47919 cri.go:89] found id: ""
	I0229 18:59:31.506996   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.507007   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:31.507013   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:31.507088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:31.542582   47919 cri.go:89] found id: ""
	I0229 18:59:31.542611   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.542623   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:31.542631   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:31.542693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:31.585470   47919 cri.go:89] found id: ""
	I0229 18:59:31.585496   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.585508   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:31.585516   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:31.585574   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:31.627751   47919 cri.go:89] found id: ""
	I0229 18:59:31.627785   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.627797   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:31.627805   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:31.627864   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:31.665988   47919 cri.go:89] found id: ""
	I0229 18:59:31.666009   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.666017   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:31.666023   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:31.666081   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:31.712553   47919 cri.go:89] found id: ""
	I0229 18:59:31.712583   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.712597   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:31.712603   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:31.712659   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.749904   47919 cri.go:89] found id: ""
	I0229 18:59:31.749944   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.749954   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:31.749965   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:31.749980   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:31.843949   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:31.843992   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:31.898158   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:31.898186   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:31.949798   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:31.949831   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.965666   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:31.965697   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:32.040368   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:34.541417   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:34.558286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:34.558345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:34.602083   47919 cri.go:89] found id: ""
	I0229 18:59:34.602113   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.602123   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:34.602130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:34.602200   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:34.647108   47919 cri.go:89] found id: ""
	I0229 18:59:34.647136   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.647146   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:34.647151   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:34.647220   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:34.692920   47919 cri.go:89] found id: ""
	I0229 18:59:34.692942   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.692950   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:34.692956   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:34.693000   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:34.739367   47919 cri.go:89] found id: ""
	I0229 18:59:34.739397   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.739408   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:34.739416   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:34.739478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:34.794083   47919 cri.go:89] found id: ""
	I0229 18:59:34.794106   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.794114   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:34.794120   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:34.794179   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:34.865371   47919 cri.go:89] found id: ""
	I0229 18:59:34.865400   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.865412   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:34.865419   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:34.865476   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:34.906957   47919 cri.go:89] found id: ""
	I0229 18:59:34.906986   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.906994   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:34.906999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:34.907063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.584743   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.085375   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.684397   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:37.183641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:36.015759   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.514451   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.948548   47919 cri.go:89] found id: ""
	I0229 18:59:34.948570   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.948577   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:34.948586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:34.948598   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:35.036558   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:35.036594   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:35.080137   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:35.080169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:35.130408   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:35.130436   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:35.148306   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:35.148332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:35.222648   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:37.723158   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:37.741809   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:37.741885   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:37.787147   47919 cri.go:89] found id: ""
	I0229 18:59:37.787177   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.787184   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:37.787192   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:37.787249   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:37.835589   47919 cri.go:89] found id: ""
	I0229 18:59:37.835613   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.835623   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:37.835630   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:37.835687   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:37.895088   47919 cri.go:89] found id: ""
	I0229 18:59:37.895118   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.895130   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:37.895137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:37.895194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:37.940837   47919 cri.go:89] found id: ""
	I0229 18:59:37.940867   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.940878   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:37.940886   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:37.940946   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:37.989155   47919 cri.go:89] found id: ""
	I0229 18:59:37.989183   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.989194   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:37.989203   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:37.989267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:38.026517   47919 cri.go:89] found id: ""
	I0229 18:59:38.026543   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.026553   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:38.026560   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:38.026623   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:38.063299   47919 cri.go:89] found id: ""
	I0229 18:59:38.063328   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.063340   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:38.063347   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:38.063393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:38.106278   47919 cri.go:89] found id: ""
	I0229 18:59:38.106298   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.106305   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:38.106315   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:38.106330   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:38.182985   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:38.183008   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:38.183038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:38.260280   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:38.260312   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:38.303648   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:38.303678   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:38.352889   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:38.352931   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:36.583258   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.583878   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:39.185221   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:41.684957   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.515303   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.017529   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.870416   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:40.885618   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:40.885692   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:40.924088   47919 cri.go:89] found id: ""
	I0229 18:59:40.924115   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.924126   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:40.924133   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:40.924192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:40.959485   47919 cri.go:89] found id: ""
	I0229 18:59:40.959513   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.959524   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:40.959532   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:40.959593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:41.009453   47919 cri.go:89] found id: ""
	I0229 18:59:41.009478   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.009489   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:41.009496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:41.009552   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:41.052894   47919 cri.go:89] found id: ""
	I0229 18:59:41.052922   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.052933   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:41.052940   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:41.052997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:41.098299   47919 cri.go:89] found id: ""
	I0229 18:59:41.098328   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.098338   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:41.098345   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:41.098460   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:41.138287   47919 cri.go:89] found id: ""
	I0229 18:59:41.138313   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.138324   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:41.138333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:41.138395   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:41.176482   47919 cri.go:89] found id: ""
	I0229 18:59:41.176512   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.176522   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:41.176529   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:41.176598   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:41.215284   47919 cri.go:89] found id: ""
	I0229 18:59:41.215307   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.215317   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:41.215327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:41.215342   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:41.230954   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:41.230982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:41.313672   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:41.313696   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:41.313713   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:41.393574   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:41.393610   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:41.443384   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:41.443422   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:43.994323   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:44.008821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:44.008892   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:44.050088   47919 cri.go:89] found id: ""
	I0229 18:59:44.050116   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.050124   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:44.050130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:44.050207   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:44.089721   47919 cri.go:89] found id: ""
	I0229 18:59:44.089749   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.089756   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:44.089762   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:44.089818   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:44.132366   47919 cri.go:89] found id: ""
	I0229 18:59:44.132398   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.132407   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:44.132412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:44.132468   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:44.173568   47919 cri.go:89] found id: ""
	I0229 18:59:44.173591   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.173598   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:44.173604   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:44.173661   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:44.214660   47919 cri.go:89] found id: ""
	I0229 18:59:44.214683   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.214691   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:44.214696   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:44.214747   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:44.254355   47919 cri.go:89] found id: ""
	I0229 18:59:44.254386   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.254397   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:44.254405   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:44.254464   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:44.293548   47919 cri.go:89] found id: ""
	I0229 18:59:44.293573   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.293584   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:44.293591   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:44.293652   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:44.333335   47919 cri.go:89] found id: ""
	I0229 18:59:44.333361   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.333372   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:44.333383   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:44.333398   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:44.348941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:44.348973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:44.419949   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:44.419968   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:44.419982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:44.503445   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:44.503479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:44.558694   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:44.558728   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:40.584127   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.084271   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.685573   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:46.184467   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:45.513896   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.514467   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.129362   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:47.145410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:47.145483   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:47.194037   47919 cri.go:89] found id: ""
	I0229 18:59:47.194073   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.194092   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:47.194100   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:47.194160   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:47.232500   47919 cri.go:89] found id: ""
	I0229 18:59:47.232528   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.232559   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:47.232568   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:47.232634   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:47.271452   47919 cri.go:89] found id: ""
	I0229 18:59:47.271485   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.271494   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:47.271501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:47.271561   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:47.313482   47919 cri.go:89] found id: ""
	I0229 18:59:47.313509   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.313520   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:47.313527   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:47.313590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:47.354958   47919 cri.go:89] found id: ""
	I0229 18:59:47.354988   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.354996   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:47.355001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:47.355092   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:47.393312   47919 cri.go:89] found id: ""
	I0229 18:59:47.393338   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.393349   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:47.393356   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:47.393415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:47.431370   47919 cri.go:89] found id: ""
	I0229 18:59:47.431396   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.431406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:47.431413   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:47.431471   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:47.471659   47919 cri.go:89] found id: ""
	I0229 18:59:47.471683   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.471692   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:47.471702   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:47.471715   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.530365   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:47.530405   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:47.558874   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:47.558903   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:47.644009   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:47.644033   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:47.644047   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:47.730063   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:47.730095   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:45.583524   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.585620   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.083189   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:48.684211   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.686885   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:49.514667   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:52.014092   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.272945   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:50.288718   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:50.288796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:50.331460   47919 cri.go:89] found id: ""
	I0229 18:59:50.331482   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.331489   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:50.331495   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:50.331543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:50.374960   47919 cri.go:89] found id: ""
	I0229 18:59:50.374989   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.375000   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:50.375006   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:50.375076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:50.415073   47919 cri.go:89] found id: ""
	I0229 18:59:50.415095   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.415102   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:50.415107   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:50.415157   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:50.452511   47919 cri.go:89] found id: ""
	I0229 18:59:50.452554   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.452563   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:50.452568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:50.452612   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:50.498103   47919 cri.go:89] found id: ""
	I0229 18:59:50.498125   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.498132   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:50.498137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:50.498193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:50.545366   47919 cri.go:89] found id: ""
	I0229 18:59:50.545397   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.545409   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:50.545417   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:50.545487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:50.608215   47919 cri.go:89] found id: ""
	I0229 18:59:50.608239   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.608250   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:50.608257   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:50.608314   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:50.660835   47919 cri.go:89] found id: ""
	I0229 18:59:50.660861   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.660881   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:50.660892   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:50.660907   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:50.749671   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:50.749712   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.797567   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:50.797595   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:50.848022   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:50.848059   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:50.862797   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:50.862820   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:50.934682   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:53.435804   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:53.451364   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:53.451440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:53.500680   47919 cri.go:89] found id: ""
	I0229 18:59:53.500706   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.500717   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:53.500744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:53.500797   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:53.565306   47919 cri.go:89] found id: ""
	I0229 18:59:53.565334   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.565344   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:53.565351   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:53.565410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:53.631438   47919 cri.go:89] found id: ""
	I0229 18:59:53.631461   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.631479   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:53.631486   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:53.631554   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:53.679482   47919 cri.go:89] found id: ""
	I0229 18:59:53.679506   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.679516   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:53.679524   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:53.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:53.722098   47919 cri.go:89] found id: ""
	I0229 18:59:53.722125   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.722135   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:53.722142   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:53.722211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:53.761804   47919 cri.go:89] found id: ""
	I0229 18:59:53.761838   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.761849   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:53.761858   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:53.761942   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:53.806109   47919 cri.go:89] found id: ""
	I0229 18:59:53.806137   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.806149   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:53.806157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:53.806219   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:53.856794   47919 cri.go:89] found id: ""
	I0229 18:59:53.856823   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.856831   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:53.856839   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:53.856849   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:53.908216   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:53.908252   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:53.923999   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:53.924038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:54.000750   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:54.000772   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:54.000783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:54.086840   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:54.086870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
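	The cycle above is minikube looking for each control-plane component's container by name and finding none. The same check can be approximated by hand with crictl; this is a sketch of the commands visible in the log, not the minikube source:
	
	# list containers for each component the log checks; empty output means "no container found"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -n "$ids" ] && echo "$name: $ids" || echo "no container was found matching \"$name\""
	done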
	I0229 18:59:52.083751   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.586556   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:53.184426   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:55.683893   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:57.685129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.513193   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.515925   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.013745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
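	The interleaved pod_ready lines come from three other StartStop profiles polling their metrics-server pods, none of which reach Ready. To inspect such a pod manually one could run something like the following (the label selector is an assumption; the log matches the pods by name):
	
	# assumed label selector for the metrics-server addon; adjust to the actual pod name if it differs
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl -n kube-system describe pod -l k8s-app=metrics-server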
	I0229 18:59:56.630728   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:56.647368   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:56.647440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:56.693706   47919 cri.go:89] found id: ""
	I0229 18:59:56.693726   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.693733   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:56.693738   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:56.693780   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:56.733377   47919 cri.go:89] found id: ""
	I0229 18:59:56.733404   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.733415   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:56.733423   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:56.733491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:56.772186   47919 cri.go:89] found id: ""
	I0229 18:59:56.772209   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.772216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:56.772222   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:56.772267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:56.811919   47919 cri.go:89] found id: ""
	I0229 18:59:56.811964   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.811977   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:56.811984   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:56.812035   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.849345   47919 cri.go:89] found id: ""
	I0229 18:59:56.849372   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.849383   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:56.849390   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:56.849447   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:56.900091   47919 cri.go:89] found id: ""
	I0229 18:59:56.900119   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.900129   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:56.900136   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:56.900193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:56.937662   47919 cri.go:89] found id: ""
	I0229 18:59:56.937692   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.937703   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:56.937710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:56.937772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:56.978195   47919 cri.go:89] found id: ""
	I0229 18:59:56.978224   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.978234   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:56.978244   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:56.978259   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:57.059190   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:57.059223   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:57.101416   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:57.101442   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:57.156102   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:57.156140   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:57.171401   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:57.171435   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:57.243717   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:59.744588   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:59.760099   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:59.760175   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:59.798722   47919 cri.go:89] found id: ""
	I0229 18:59:59.798751   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.798762   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:59.798770   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:59.798830   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:59.842423   47919 cri.go:89] found id: ""
	I0229 18:59:59.842452   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.842463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:59.842470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:59.842532   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:59.883742   47919 cri.go:89] found id: ""
	I0229 18:59:59.883768   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.883775   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:59.883781   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:59.883826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:59.924062   47919 cri.go:89] found id: ""
	I0229 18:59:59.924091   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.924102   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:59.924109   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:59.924166   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.587621   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.087882   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.685911   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:02.185406   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:01.014202   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.014972   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.962465   47919 cri.go:89] found id: ""
	I0229 18:59:59.962497   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.962508   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:59.962515   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:59.962576   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:00.006069   47919 cri.go:89] found id: ""
	I0229 19:00:00.006103   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.006114   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:00.006123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:00.006185   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:00.047671   47919 cri.go:89] found id: ""
	I0229 19:00:00.047697   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.047709   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:00.047715   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:00.047773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:00.091452   47919 cri.go:89] found id: ""
	I0229 19:00:00.091475   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.091486   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:00.091497   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:00.091511   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:00.143282   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:00.143313   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:00.158342   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:00.158366   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:00.239745   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:00.239774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:00.239792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:00.339048   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:00.339083   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
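	When no component containers are found, minikube falls back to collecting host-level logs: the kubelet and CRI-O journals, recent kernel warnings, the (failing) describe-nodes output, and a container listing. Roughly the same bundle can be collected by hand; the commands below are the ones the log shows:
	
	sudo journalctl -u kubelet -n 400                                          # kubelet service log
	sudo journalctl -u crio -n 400                                             # CRI-O runtime log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo crictl ps -a                                                          # all containers, any state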
	I0229 19:00:02.898414   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:02.914154   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:02.914221   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:02.956122   47919 cri.go:89] found id: ""
	I0229 19:00:02.956151   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.956211   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:02.956225   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:02.956272   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:02.993609   47919 cri.go:89] found id: ""
	I0229 19:00:02.993636   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.993646   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:02.993659   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:02.993720   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:03.038131   47919 cri.go:89] found id: ""
	I0229 19:00:03.038152   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.038160   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:03.038165   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:03.038217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:03.090845   47919 cri.go:89] found id: ""
	I0229 19:00:03.090866   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.090873   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:03.090878   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:03.090935   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:03.129520   47919 cri.go:89] found id: ""
	I0229 19:00:03.129549   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.129561   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:03.129568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:03.129620   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:03.178528   47919 cri.go:89] found id: ""
	I0229 19:00:03.178557   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.178567   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:03.178575   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:03.178631   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:03.218337   47919 cri.go:89] found id: ""
	I0229 19:00:03.218357   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.218364   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:03.218369   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:03.218417   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:03.267682   47919 cri.go:89] found id: ""
	I0229 19:00:03.267713   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.267726   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:03.267735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:03.267753   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:03.286961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:03.286987   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:03.376514   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:03.376535   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:03.376546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:03.459824   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:03.459872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:03.505821   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:03.505848   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:01.582954   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.583198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:04.684892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.685508   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:05.015836   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:07.514376   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.062525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:06.077637   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:06.077708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:06.119344   47919 cri.go:89] found id: ""
	I0229 19:00:06.119368   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.119376   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:06.119381   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:06.119430   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:06.158209   47919 cri.go:89] found id: ""
	I0229 19:00:06.158232   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.158239   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:06.158245   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:06.158291   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:06.198521   47919 cri.go:89] found id: ""
	I0229 19:00:06.198545   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.198553   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:06.198559   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:06.198609   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:06.235872   47919 cri.go:89] found id: ""
	I0229 19:00:06.235919   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.235930   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:06.235937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:06.235998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:06.282814   47919 cri.go:89] found id: ""
	I0229 19:00:06.282841   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.282853   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:06.282860   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:06.282928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:06.330549   47919 cri.go:89] found id: ""
	I0229 19:00:06.330572   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.330580   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:06.330585   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:06.330632   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:06.399968   47919 cri.go:89] found id: ""
	I0229 19:00:06.399996   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.400006   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:06.400012   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:06.400062   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:06.444899   47919 cri.go:89] found id: ""
	I0229 19:00:06.444921   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.444929   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:06.444937   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:06.444950   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:06.460552   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:06.460580   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:06.532932   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:06.532956   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:06.532969   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:06.615130   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:06.615170   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:06.664499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:06.664532   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.219226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:09.236769   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:09.236829   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:09.292309   47919 cri.go:89] found id: ""
	I0229 19:00:09.292331   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.292339   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:09.292345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:09.292392   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:09.355237   47919 cri.go:89] found id: ""
	I0229 19:00:09.355259   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.355267   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:09.355272   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:09.355319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:09.397950   47919 cri.go:89] found id: ""
	I0229 19:00:09.397977   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.397987   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:09.397995   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:09.398057   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:09.436751   47919 cri.go:89] found id: ""
	I0229 19:00:09.436779   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.436789   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:09.436797   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:09.436862   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:09.480288   47919 cri.go:89] found id: ""
	I0229 19:00:09.480311   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.480318   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:09.480324   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:09.480375   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:09.523576   47919 cri.go:89] found id: ""
	I0229 19:00:09.523599   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.523606   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:09.523611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:09.523658   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:09.562818   47919 cri.go:89] found id: ""
	I0229 19:00:09.562848   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.562859   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:09.562872   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:09.562919   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:09.603331   47919 cri.go:89] found id: ""
	I0229 19:00:09.603357   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.603369   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:09.603379   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:09.603393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.652060   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:09.652089   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:09.668372   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:09.668394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:09.745897   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:09.745923   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:09.745937   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:09.826981   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:09.827014   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:05.590288   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:08.083411   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.084324   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:09.184577   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:11.185922   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.015288   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.513820   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.371447   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:12.385523   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:12.385613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:12.422038   47919 cri.go:89] found id: ""
	I0229 19:00:12.422067   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.422077   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:12.422084   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:12.422155   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:12.460443   47919 cri.go:89] found id: ""
	I0229 19:00:12.460470   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.460487   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:12.460495   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:12.460551   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:12.502791   47919 cri.go:89] found id: ""
	I0229 19:00:12.502820   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.502830   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:12.502838   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:12.502897   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:12.540738   47919 cri.go:89] found id: ""
	I0229 19:00:12.540769   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.540780   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:12.540786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:12.540845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:12.580041   47919 cri.go:89] found id: ""
	I0229 19:00:12.580072   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.580084   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:12.580091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:12.580151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:12.620721   47919 cri.go:89] found id: ""
	I0229 19:00:12.620750   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.620758   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:12.620763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:12.620820   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:12.659877   47919 cri.go:89] found id: ""
	I0229 19:00:12.659906   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.659917   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:12.659925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:12.659975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:12.699133   47919 cri.go:89] found id: ""
	I0229 19:00:12.699160   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.699170   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:12.699177   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:12.699188   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.742164   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:12.742189   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:12.792215   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:12.792248   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:12.808322   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:12.808344   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:12.879089   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:12.879114   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:12.879129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:12.586572   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.083323   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:13.687899   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:16.184671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:14.521430   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:17.013799   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:19.014661   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.466778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:15.480875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:15.480945   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:15.525331   47919 cri.go:89] found id: ""
	I0229 19:00:15.525353   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.525360   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:15.525366   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:15.525422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:15.567787   47919 cri.go:89] found id: ""
	I0229 19:00:15.567819   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.567831   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:15.567838   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:15.567923   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:15.609440   47919 cri.go:89] found id: ""
	I0229 19:00:15.609467   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.609477   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:15.609484   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:15.609559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:15.650113   47919 cri.go:89] found id: ""
	I0229 19:00:15.650142   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.650153   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:15.650161   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:15.650223   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:15.691499   47919 cri.go:89] found id: ""
	I0229 19:00:15.691527   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.691537   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:15.691544   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:15.691603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:15.731199   47919 cri.go:89] found id: ""
	I0229 19:00:15.731227   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.731239   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:15.731246   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:15.731324   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:15.772997   47919 cri.go:89] found id: ""
	I0229 19:00:15.773019   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.773027   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:15.773032   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:15.773091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:15.811223   47919 cri.go:89] found id: ""
	I0229 19:00:15.811244   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.811252   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:15.811271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:15.811283   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:15.862159   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:15.862196   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:15.877436   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:15.877460   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:15.948486   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:15.948513   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:15.948525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:16.030585   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:16.030617   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:18.592020   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:18.607286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:18.607368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:18.647886   47919 cri.go:89] found id: ""
	I0229 19:00:18.647913   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.647924   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:18.647951   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:18.648007   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:18.687394   47919 cri.go:89] found id: ""
	I0229 19:00:18.687420   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.687430   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:18.687436   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:18.687491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:18.734159   47919 cri.go:89] found id: ""
	I0229 19:00:18.734187   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.734198   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:18.734205   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:18.734262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:18.782950   47919 cri.go:89] found id: ""
	I0229 19:00:18.782989   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.783000   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:18.783008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:18.783089   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:18.818695   47919 cri.go:89] found id: ""
	I0229 19:00:18.818723   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.818734   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:18.818742   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:18.818805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:18.859479   47919 cri.go:89] found id: ""
	I0229 19:00:18.859504   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.859515   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:18.859522   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:18.859580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:18.902897   47919 cri.go:89] found id: ""
	I0229 19:00:18.902923   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.902934   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:18.902942   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:18.903002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:18.947708   47919 cri.go:89] found id: ""
	I0229 19:00:18.947731   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.947742   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:18.947752   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:18.947772   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:19.025069   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:19.025092   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:19.025107   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:19.115589   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:19.115626   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:19.164930   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:19.164960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:19.217497   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:19.217531   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:17.584961   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:20.081558   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:18.685924   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.184830   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.015314   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.513573   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.733516   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:21.748586   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:21.748648   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:21.788383   47919 cri.go:89] found id: ""
	I0229 19:00:21.788409   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.788420   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:21.788429   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:21.788487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:21.827147   47919 cri.go:89] found id: ""
	I0229 19:00:21.827176   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.827187   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:21.827194   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:21.827255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:21.867525   47919 cri.go:89] found id: ""
	I0229 19:00:21.867552   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.867561   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:21.867570   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:21.867618   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:21.911542   47919 cri.go:89] found id: ""
	I0229 19:00:21.911564   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.911573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:21.911578   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:21.911629   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:21.949779   47919 cri.go:89] found id: ""
	I0229 19:00:21.949803   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.949815   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:21.949821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:21.949877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:21.989663   47919 cri.go:89] found id: ""
	I0229 19:00:21.989692   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.989701   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:21.989706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:21.989750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:22.040777   47919 cri.go:89] found id: ""
	I0229 19:00:22.040803   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.040813   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:22.040820   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:22.040876   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:22.100661   47919 cri.go:89] found id: ""
	I0229 19:00:22.100682   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.100689   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:22.100697   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:22.100707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:22.165652   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:22.165682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:22.180278   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:22.180301   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:22.250220   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:22.250242   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:22.250254   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:22.339122   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:22.339160   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
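	The whole sequence repeats roughly every three seconds until either a kube-apiserver container appears or the start-up retry budget runs out. A hedged sketch of an equivalent wait loop (the 60 x 3s budget here is illustrative, not the value minikube uses):
	
	# poll for a kube-apiserver container; give up after ~3 minutes (illustrative timeout)
	for i in $(seq 1 60); do
	  if [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; then
	    echo "kube-apiserver container found"; break
	  fi
	  sleep 3
	done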
	I0229 19:00:24.894485   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:24.910480   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:24.910555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:22.086489   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.582331   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.685199   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:26.185268   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:25.514168   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.014178   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.949857   47919 cri.go:89] found id: ""
	I0229 19:00:24.949880   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.949891   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:24.949898   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:24.949968   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:24.993325   47919 cri.go:89] found id: ""
	I0229 19:00:24.993355   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.993366   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:24.993374   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:24.993431   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:25.053180   47919 cri.go:89] found id: ""
	I0229 19:00:25.053201   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.053208   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:25.053214   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:25.053269   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:25.105886   47919 cri.go:89] found id: ""
	I0229 19:00:25.105912   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.105919   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:25.105924   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:25.105969   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:25.161860   47919 cri.go:89] found id: ""
	I0229 19:00:25.161889   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.161907   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:25.161918   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:25.161982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:25.208566   47919 cri.go:89] found id: ""
	I0229 19:00:25.208591   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.208601   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:25.208625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:25.208690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:25.252151   47919 cri.go:89] found id: ""
	I0229 19:00:25.252173   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.252183   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:25.252190   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:25.252255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:25.293860   47919 cri.go:89] found id: ""
	I0229 19:00:25.293892   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.293903   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:25.293913   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:25.293926   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:25.343332   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:25.343367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:25.357855   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:25.357883   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:25.438031   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:25.438052   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:25.438064   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:25.523752   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:25.523789   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.078701   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:28.103422   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:28.103514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:28.149369   47919 cri.go:89] found id: ""
	I0229 19:00:28.149396   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.149407   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:28.149414   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:28.149481   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:28.191312   47919 cri.go:89] found id: ""
	I0229 19:00:28.191340   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.191350   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:28.191357   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:28.191422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:28.232257   47919 cri.go:89] found id: ""
	I0229 19:00:28.232283   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.232293   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:28.232301   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:28.232370   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:28.278477   47919 cri.go:89] found id: ""
	I0229 19:00:28.278502   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.278512   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:28.278520   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:28.278580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:28.319368   47919 cri.go:89] found id: ""
	I0229 19:00:28.319393   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.319401   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:28.319406   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:28.319451   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:28.363604   47919 cri.go:89] found id: ""
	I0229 19:00:28.363628   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.363636   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:28.363642   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:28.363688   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:28.403101   47919 cri.go:89] found id: ""
	I0229 19:00:28.403126   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.403137   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:28.403144   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:28.403203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:28.443915   47919 cri.go:89] found id: ""
	I0229 19:00:28.443939   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.443949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:28.443961   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:28.443974   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:28.459084   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:28.459112   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:28.531798   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:28.531827   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:28.531843   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:28.618141   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:28.618182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.664993   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:28.665024   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:26.582801   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.584979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.684541   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.184185   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:30.014681   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:32.513959   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.218793   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:31.234816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:31.234890   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:31.273656   47919 cri.go:89] found id: ""
	I0229 19:00:31.273684   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.273692   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:31.273698   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:31.273744   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:31.316292   47919 cri.go:89] found id: ""
	I0229 19:00:31.316314   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.316322   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:31.316330   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:31.316391   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:31.356701   47919 cri.go:89] found id: ""
	I0229 19:00:31.356730   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.356742   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:31.356760   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:31.356813   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:31.395796   47919 cri.go:89] found id: ""
	I0229 19:00:31.395822   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.395830   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:31.395835   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:31.395884   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:31.436461   47919 cri.go:89] found id: ""
	I0229 19:00:31.436483   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.436491   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:31.436496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:31.436543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:31.482802   47919 cri.go:89] found id: ""
	I0229 19:00:31.482830   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.482840   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:31.482848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:31.482895   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:31.525897   47919 cri.go:89] found id: ""
	I0229 19:00:31.525930   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.525939   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:31.525949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:31.526009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:31.566323   47919 cri.go:89] found id: ""
	I0229 19:00:31.566350   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.566362   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:31.566372   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:31.566388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.618633   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:31.618674   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:31.634144   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:31.634166   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:31.712112   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:31.712136   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:31.712150   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:31.795159   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:31.795190   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:34.365419   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:34.380447   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:34.380521   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:34.422256   47919 cri.go:89] found id: ""
	I0229 19:00:34.422284   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.422295   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:34.422302   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:34.422359   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:34.466548   47919 cri.go:89] found id: ""
	I0229 19:00:34.466578   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.466588   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:34.466596   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:34.466654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:34.508359   47919 cri.go:89] found id: ""
	I0229 19:00:34.508395   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.508407   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:34.508414   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:34.508482   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:34.551284   47919 cri.go:89] found id: ""
	I0229 19:00:34.551308   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.551319   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:34.551325   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:34.551371   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:34.593360   47919 cri.go:89] found id: ""
	I0229 19:00:34.593385   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.593395   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:34.593403   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:34.593469   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:34.632097   47919 cri.go:89] found id: ""
	I0229 19:00:34.632117   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.632124   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:34.632135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:34.632180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:34.679495   47919 cri.go:89] found id: ""
	I0229 19:00:34.679521   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.679529   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:34.679534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:34.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:34.723322   47919 cri.go:89] found id: ""
	I0229 19:00:34.723351   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.723361   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:34.723371   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:34.723387   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:34.741497   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:34.741525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:34.833908   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:34.833932   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:34.833944   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:34.927172   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:34.927203   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:31.083690   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.583972   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.186129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:35.685350   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.514619   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:36.514937   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:39.014137   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.980487   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:34.980520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:37.535829   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:37.551274   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:37.551342   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:37.590225   47919 cri.go:89] found id: ""
	I0229 19:00:37.590263   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.590282   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:37.590289   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:37.590347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:37.630546   47919 cri.go:89] found id: ""
	I0229 19:00:37.630574   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.630585   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:37.630592   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:37.630651   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:37.676219   47919 cri.go:89] found id: ""
	I0229 19:00:37.676250   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.676261   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:37.676268   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:37.676329   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:37.713689   47919 cri.go:89] found id: ""
	I0229 19:00:37.713712   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.713721   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:37.713729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:37.713791   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:37.767999   47919 cri.go:89] found id: ""
	I0229 19:00:37.768034   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.768049   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:37.768057   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:37.768114   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:37.816836   47919 cri.go:89] found id: ""
	I0229 19:00:37.816865   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.816876   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:37.816884   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:37.816948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:37.876044   47919 cri.go:89] found id: ""
	I0229 19:00:37.876072   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.876084   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:37.876091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:37.876151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:37.926075   47919 cri.go:89] found id: ""
	I0229 19:00:37.926110   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.926122   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:37.926132   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:37.926147   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:38.004621   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:38.004648   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:38.004663   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:38.091456   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:38.091493   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:38.140118   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:38.140144   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:38.197206   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:38.197243   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:35.587937   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.082516   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.083269   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.184999   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.684029   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:42.684537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:41.016248   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:43.018730   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.713817   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:40.731550   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:40.731613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:40.787760   47919 cri.go:89] found id: ""
	I0229 19:00:40.787788   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.787798   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:40.787806   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:40.787868   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:40.847842   47919 cri.go:89] found id: ""
	I0229 19:00:40.847870   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.847881   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:40.847888   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:40.847956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:40.888452   47919 cri.go:89] found id: ""
	I0229 19:00:40.888481   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.888493   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:40.888501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:40.888562   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:40.927727   47919 cri.go:89] found id: ""
	I0229 19:00:40.927749   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.927757   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:40.927762   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:40.927821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:40.967696   47919 cri.go:89] found id: ""
	I0229 19:00:40.967725   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.967737   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:40.967745   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:40.967804   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:41.008092   47919 cri.go:89] found id: ""
	I0229 19:00:41.008117   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.008127   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:41.008135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:41.008190   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:41.049235   47919 cri.go:89] found id: ""
	I0229 19:00:41.049265   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.049277   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:41.049285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:41.049393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:41.092962   47919 cri.go:89] found id: ""
	I0229 19:00:41.092988   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.092999   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:41.093018   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:41.093033   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:41.146322   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:41.146368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:41.161961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:41.161986   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:41.248674   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:41.248705   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:41.248732   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:41.333647   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:41.333689   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:43.882007   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:43.897786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:43.897860   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:43.943918   47919 cri.go:89] found id: ""
	I0229 19:00:43.943946   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.943955   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:43.943960   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:43.944010   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:43.988622   47919 cri.go:89] found id: ""
	I0229 19:00:43.988643   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.988650   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:43.988655   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:43.988699   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:44.036419   47919 cri.go:89] found id: ""
	I0229 19:00:44.036455   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.036466   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:44.036471   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:44.036530   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:44.078018   47919 cri.go:89] found id: ""
	I0229 19:00:44.078046   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.078056   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:44.078063   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:44.078119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:44.116142   47919 cri.go:89] found id: ""
	I0229 19:00:44.116168   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.116177   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:44.116183   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:44.116243   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:44.158804   47919 cri.go:89] found id: ""
	I0229 19:00:44.158826   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.158833   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:44.158839   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:44.158889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:44.204069   47919 cri.go:89] found id: ""
	I0229 19:00:44.204096   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.204106   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:44.204114   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:44.204173   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:44.247904   47919 cri.go:89] found id: ""
	I0229 19:00:44.247935   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.247949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:44.247959   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:44.247973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:44.338653   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:44.338690   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:44.384041   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:44.384069   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:44.439539   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:44.439575   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:44.455345   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:44.455372   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:44.538204   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:42.083656   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:44.584493   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.184119   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.684925   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.513638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:48.014638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.038895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:47.054457   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:47.054539   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:47.099854   47919 cri.go:89] found id: ""
	I0229 19:00:47.099879   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.099890   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.099899   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:47.099956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:47.141354   47919 cri.go:89] found id: ""
	I0229 19:00:47.141381   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.141391   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.141398   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:47.141454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:47.181906   47919 cri.go:89] found id: ""
	I0229 19:00:47.181932   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.181942   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.181949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:47.182003   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:47.222505   47919 cri.go:89] found id: ""
	I0229 19:00:47.222530   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.222538   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:47.222548   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:47.222603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:47.265567   47919 cri.go:89] found id: ""
	I0229 19:00:47.265604   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.265616   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:47.265625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:47.265690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:47.304698   47919 cri.go:89] found id: ""
	I0229 19:00:47.304723   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.304730   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:47.304736   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:47.304781   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:47.344154   47919 cri.go:89] found id: ""
	I0229 19:00:47.344175   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.344182   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:47.344187   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:47.344230   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:47.383849   47919 cri.go:89] found id: ""
	I0229 19:00:47.383878   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.383889   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:47.383900   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:47.383915   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:47.458895   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:47.458914   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:47.458933   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:47.547776   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:47.547823   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:47.622606   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:47.622639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:47.685327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:47.685356   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:47.084225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:49.584008   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.186274   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.684452   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.014671   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.514321   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.202151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:50.218008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:50.218063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:50.265322   47919 cri.go:89] found id: ""
	I0229 19:00:50.265345   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.265353   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:50.265358   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:50.265424   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:50.305646   47919 cri.go:89] found id: ""
	I0229 19:00:50.305669   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.305677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:50.305682   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:50.305732   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:50.342855   47919 cri.go:89] found id: ""
	I0229 19:00:50.342885   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.342894   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:50.342899   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:50.342948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:50.385365   47919 cri.go:89] found id: ""
	I0229 19:00:50.385396   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.385404   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:50.385410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:50.385456   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:50.425212   47919 cri.go:89] found id: ""
	I0229 19:00:50.425238   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.425256   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:50.425263   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:50.425321   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:50.465325   47919 cri.go:89] found id: ""
	I0229 19:00:50.465355   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.465366   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:50.465382   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:50.465455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:50.516256   47919 cri.go:89] found id: ""
	I0229 19:00:50.516282   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.516291   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:50.516297   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:50.516355   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:50.562233   47919 cri.go:89] found id: ""
	I0229 19:00:50.562262   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.562272   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:50.562280   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:50.562292   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:50.660311   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:50.660346   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:50.702790   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:50.702815   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:50.752085   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:50.752123   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.768346   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:50.768378   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:50.842567   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.343011   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:53.358002   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:53.358072   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:53.398397   47919 cri.go:89] found id: ""
	I0229 19:00:53.398424   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.398433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:53.398440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:53.398501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:53.437020   47919 cri.go:89] found id: ""
	I0229 19:00:53.437048   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.437059   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:53.437067   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:53.437116   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:53.473350   47919 cri.go:89] found id: ""
	I0229 19:00:53.473377   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.473388   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:53.473395   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:53.473454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:53.525678   47919 cri.go:89] found id: ""
	I0229 19:00:53.525701   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.525708   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:53.525716   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:53.525772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:53.595411   47919 cri.go:89] found id: ""
	I0229 19:00:53.595437   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.595448   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:53.595456   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:53.595518   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:53.635890   47919 cri.go:89] found id: ""
	I0229 19:00:53.635916   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.635923   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:53.635929   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:53.635992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:53.674966   47919 cri.go:89] found id: ""
	I0229 19:00:53.674992   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.675000   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:53.675005   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:53.675076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:53.713839   47919 cri.go:89] found id: ""
	I0229 19:00:53.713860   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.713868   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:53.713882   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:53.713896   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:53.765185   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:53.765219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:53.780830   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:53.780855   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:53.858528   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.858552   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:53.858567   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:53.936002   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:53.936034   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:52.085082   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:54.583306   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.184645   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.684780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.015941   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.017683   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:56.481406   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:56.498980   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:56.499059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:56.557482   47919 cri.go:89] found id: ""
	I0229 19:00:56.557509   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.557520   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:56.557528   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:56.557587   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:56.625912   47919 cri.go:89] found id: ""
	I0229 19:00:56.625941   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.625952   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:56.625964   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:56.626023   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:56.663104   47919 cri.go:89] found id: ""
	I0229 19:00:56.663193   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.663210   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:56.663217   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:56.663265   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:56.707473   47919 cri.go:89] found id: ""
	I0229 19:00:56.707494   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.707502   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:56.707507   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:56.707564   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:56.752569   47919 cri.go:89] found id: ""
	I0229 19:00:56.752593   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.752604   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:56.752611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:56.752673   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:56.793618   47919 cri.go:89] found id: ""
	I0229 19:00:56.793660   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.793672   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:56.793680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:56.793741   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:56.833215   47919 cri.go:89] found id: ""
	I0229 19:00:56.833241   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.833252   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:56.833259   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:56.833319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.873162   47919 cri.go:89] found id: ""
	I0229 19:00:56.873187   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.873195   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:56.873203   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:56.873219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:56.887683   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:56.887707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:56.957351   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:56.957369   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:56.957380   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:57.042415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:57.042449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:57.087636   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:57.087660   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:59.637662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:59.652747   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:59.652815   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:59.692780   47919 cri.go:89] found id: ""
	I0229 19:00:59.692801   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.692809   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:59.692814   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:59.692891   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:59.733445   47919 cri.go:89] found id: ""
	I0229 19:00:59.733474   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.733482   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:59.733488   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:59.733535   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:59.769723   47919 cri.go:89] found id: ""
	I0229 19:00:59.769754   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.769764   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:59.769770   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:59.769828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:59.807810   47919 cri.go:89] found id: ""
	I0229 19:00:59.807837   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.807848   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:59.807855   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:59.807916   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:59.849623   47919 cri.go:89] found id: ""
	I0229 19:00:59.849649   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.849659   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:59.849666   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:59.849730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:59.895593   47919 cri.go:89] found id: ""
	I0229 19:00:59.895620   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.895631   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:59.895638   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:59.895698   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:59.935693   47919 cri.go:89] found id: ""
	I0229 19:00:59.935716   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.935724   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:59.935729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:59.935786   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.585093   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.083485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.687672   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:02.184276   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:01.027786   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.514296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.977655   47919 cri.go:89] found id: ""
	I0229 19:00:59.977685   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.977693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:59.977710   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:59.977725   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:59.992518   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:59.992545   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:00.075660   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:00.075679   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:00.075691   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:00.162338   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:00.162384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:00.207000   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:00.207049   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:02.759942   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:02.776225   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:02.776293   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:02.812511   47919 cri.go:89] found id: ""
	I0229 19:01:02.812538   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.812549   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:02.812556   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:02.812614   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:02.851417   47919 cri.go:89] found id: ""
	I0229 19:01:02.851448   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.851467   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:02.851483   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:02.851560   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:02.894440   47919 cri.go:89] found id: ""
	I0229 19:01:02.894465   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.894475   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:02.894487   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:02.894542   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:02.931046   47919 cri.go:89] found id: ""
	I0229 19:01:02.931075   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.931084   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:02.931092   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:02.931150   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:02.971204   47919 cri.go:89] found id: ""
	I0229 19:01:02.971226   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.971233   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:02.971238   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:02.971307   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:03.011695   47919 cri.go:89] found id: ""
	I0229 19:01:03.011723   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.011734   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:03.011741   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:03.011796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:03.054738   47919 cri.go:89] found id: ""
	I0229 19:01:03.054763   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.054775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:03.054782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:03.054857   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:03.099242   47919 cri.go:89] found id: ""
	I0229 19:01:03.099267   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.099278   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:03.099289   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:03.099303   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:03.148748   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:03.148778   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:03.164550   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:03.164578   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:03.241564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:03.241586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:03.241601   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:03.329350   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:03.329384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:01.085890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.582960   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:04.683846   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:06.684979   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.514444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.014275   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.884415   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:05.901979   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:05.902044   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:05.946382   47919 cri.go:89] found id: ""
	I0229 19:01:05.946407   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.946415   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:05.946421   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:05.946488   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:05.991783   47919 cri.go:89] found id: ""
	I0229 19:01:05.991807   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.991816   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:05.991822   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:05.991879   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:06.034390   47919 cri.go:89] found id: ""
	I0229 19:01:06.034417   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.034426   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:06.034431   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:06.034475   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:06.078417   47919 cri.go:89] found id: ""
	I0229 19:01:06.078445   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.078456   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:06.078463   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:06.078527   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:06.119892   47919 cri.go:89] found id: ""
	I0229 19:01:06.119927   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.119938   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:06.119952   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:06.120008   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:06.159308   47919 cri.go:89] found id: ""
	I0229 19:01:06.159332   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.159339   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:06.159346   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:06.159410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:06.208715   47919 cri.go:89] found id: ""
	I0229 19:01:06.208742   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.208751   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:06.208756   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:06.208812   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:06.253831   47919 cri.go:89] found id: ""
	I0229 19:01:06.253858   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.253866   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:06.253881   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:06.253895   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:06.315105   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:06.315141   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:06.349340   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:06.349386   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:06.431456   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:06.431477   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:06.431492   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:06.517754   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:06.517783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.064267   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:09.078751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:09.078822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:09.130371   47919 cri.go:89] found id: ""
	I0229 19:01:09.130396   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.130404   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:09.130410   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:09.130461   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:09.166312   47919 cri.go:89] found id: ""
	I0229 19:01:09.166340   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.166351   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:09.166359   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:09.166415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:09.202957   47919 cri.go:89] found id: ""
	I0229 19:01:09.202978   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.202985   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:09.202991   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:09.203050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:09.242350   47919 cri.go:89] found id: ""
	I0229 19:01:09.242380   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.242391   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:09.242399   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:09.242455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:09.300471   47919 cri.go:89] found id: ""
	I0229 19:01:09.300492   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.300500   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:09.300505   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:09.300568   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:09.356861   47919 cri.go:89] found id: ""
	I0229 19:01:09.356886   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.356893   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:09.356898   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:09.356965   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:09.411042   47919 cri.go:89] found id: ""
	I0229 19:01:09.411067   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.411075   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:09.411080   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:09.411136   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:09.446312   47919 cri.go:89] found id: ""
	I0229 19:01:09.446336   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.446347   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:09.446356   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:09.446367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.492195   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:09.492227   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:09.541943   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:09.541973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:09.557347   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:09.557373   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:09.635319   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:09.635363   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:09.635379   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:05.584255   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.082899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.083808   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:09.189158   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:11.684731   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.513801   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.514492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.224271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:12.243330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:12.243403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:12.285525   47919 cri.go:89] found id: ""
	I0229 19:01:12.285547   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.285556   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:12.285561   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:12.285617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:12.347511   47919 cri.go:89] found id: ""
	I0229 19:01:12.347535   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.347543   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:12.347548   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:12.347593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:12.392145   47919 cri.go:89] found id: ""
	I0229 19:01:12.392207   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.392231   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:12.392248   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:12.392366   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:12.430238   47919 cri.go:89] found id: ""
	I0229 19:01:12.430268   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.430278   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:12.430286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:12.430345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:12.473019   47919 cri.go:89] found id: ""
	I0229 19:01:12.473054   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.473065   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:12.473072   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:12.473131   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:12.510653   47919 cri.go:89] found id: ""
	I0229 19:01:12.510681   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.510692   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:12.510699   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:12.510759   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:12.548137   47919 cri.go:89] found id: ""
	I0229 19:01:12.548163   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.548171   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:12.548176   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:12.548232   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:12.588416   47919 cri.go:89] found id: ""
	I0229 19:01:12.588435   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.588443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:12.588452   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:12.588467   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:12.603651   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:12.603681   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:12.681060   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:12.681081   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:12.681094   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.764839   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:12.764870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:12.807178   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:12.807202   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:12.583319   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.583681   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.184569   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:16.185919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.514955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:17.014358   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.016452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:15.357205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:15.382491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:15.382571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:15.422538   47919 cri.go:89] found id: ""
	I0229 19:01:15.422561   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.422568   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:15.422577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:15.422635   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:15.464564   47919 cri.go:89] found id: ""
	I0229 19:01:15.464593   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.464601   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:15.464607   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:15.464662   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:15.502625   47919 cri.go:89] found id: ""
	I0229 19:01:15.502650   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.502662   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:15.502669   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:15.502724   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:15.543187   47919 cri.go:89] found id: ""
	I0229 19:01:15.543215   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.543229   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:15.543234   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:15.543283   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:15.585273   47919 cri.go:89] found id: ""
	I0229 19:01:15.585296   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.585306   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:15.585314   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:15.585386   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:15.626180   47919 cri.go:89] found id: ""
	I0229 19:01:15.626208   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.626219   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:15.626227   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:15.626288   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:15.670572   47919 cri.go:89] found id: ""
	I0229 19:01:15.670596   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.670604   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:15.670610   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:15.670657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:15.710549   47919 cri.go:89] found id: ""
	I0229 19:01:15.710587   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.710595   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:15.710604   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:15.710618   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.765148   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:15.765180   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:15.780717   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:15.780742   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:15.852811   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:15.852835   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:15.852856   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:15.930728   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:15.930759   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.483798   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:18.497545   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:18.497611   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:18.540226   47919 cri.go:89] found id: ""
	I0229 19:01:18.540256   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.540266   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:18.540274   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:18.540336   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:18.578106   47919 cri.go:89] found id: ""
	I0229 19:01:18.578124   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.578134   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:18.578142   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:18.578192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:18.617138   47919 cri.go:89] found id: ""
	I0229 19:01:18.617167   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.617178   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:18.617185   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:18.617242   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:18.654667   47919 cri.go:89] found id: ""
	I0229 19:01:18.654762   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.654779   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:18.654787   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:18.654845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:18.695837   47919 cri.go:89] found id: ""
	I0229 19:01:18.695859   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.695866   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:18.695875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:18.695929   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:18.738178   47919 cri.go:89] found id: ""
	I0229 19:01:18.738199   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.738206   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:18.738211   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:18.738259   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:18.777018   47919 cri.go:89] found id: ""
	I0229 19:01:18.777044   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.777052   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:18.777058   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:18.777102   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:18.820701   47919 cri.go:89] found id: ""
	I0229 19:01:18.820723   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.820734   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:18.820746   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:18.820762   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:18.907150   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:18.907182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.950363   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:18.950393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:18.999446   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:18.999479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:19.020681   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:19.020714   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:19.139305   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:17.083357   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.087286   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:18.684811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:20.684974   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:22.685289   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.513256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.513492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.640062   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:21.654739   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:21.654799   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:21.701885   47919 cri.go:89] found id: ""
	I0229 19:01:21.701912   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.701921   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:21.701929   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:21.701987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:21.746736   47919 cri.go:89] found id: ""
	I0229 19:01:21.746767   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.746780   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:21.746787   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:21.746847   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:21.784830   47919 cri.go:89] found id: ""
	I0229 19:01:21.784851   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.784859   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:21.784865   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:21.784911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.824122   47919 cri.go:89] found id: ""
	I0229 19:01:21.824151   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.824162   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:21.824171   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:21.824217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:21.869937   47919 cri.go:89] found id: ""
	I0229 19:01:21.869967   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.869979   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:21.869986   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:21.870043   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:21.909902   47919 cri.go:89] found id: ""
	I0229 19:01:21.909928   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.909939   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:21.909946   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:21.910005   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:21.953980   47919 cri.go:89] found id: ""
	I0229 19:01:21.954021   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.954033   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:21.954040   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:21.954108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:21.997483   47919 cri.go:89] found id: ""
	I0229 19:01:21.997510   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.997521   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:21.997531   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:21.997546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:22.108610   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:22.108639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:22.153571   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:22.153596   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:22.204525   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:22.204555   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:22.219217   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:22.219241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:22.294794   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:24.795157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:24.811292   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:24.811363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:24.854354   47919 cri.go:89] found id: ""
	I0229 19:01:24.854387   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.854396   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:24.854402   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:24.854455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:24.890800   47919 cri.go:89] found id: ""
	I0229 19:01:24.890828   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.890838   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:24.890844   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:24.890900   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:24.930961   47919 cri.go:89] found id: ""
	I0229 19:01:24.930983   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.930991   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:24.931001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:24.931073   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.582702   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.584665   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.185732   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:27.683784   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.513886   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.016852   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:24.968719   47919 cri.go:89] found id: ""
	I0229 19:01:24.968740   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.968747   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:24.968752   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:24.968809   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:25.012723   47919 cri.go:89] found id: ""
	I0229 19:01:25.012746   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.012756   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:25.012763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:25.012821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:25.064388   47919 cri.go:89] found id: ""
	I0229 19:01:25.064412   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.064422   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:25.064435   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:25.064496   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:25.122256   47919 cri.go:89] found id: ""
	I0229 19:01:25.122277   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.122286   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:25.122291   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:25.122335   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:25.165487   47919 cri.go:89] found id: ""
	I0229 19:01:25.165515   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.165526   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:25.165536   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:25.165557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:25.249294   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:25.249333   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:25.297013   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:25.297048   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:25.346276   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:25.346309   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:25.362604   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:25.362635   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:25.434586   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:27.935727   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:27.950680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:27.950750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:27.989253   47919 cri.go:89] found id: ""
	I0229 19:01:27.989282   47919 logs.go:276] 0 containers: []
	W0229 19:01:27.989293   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:27.989300   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:27.989357   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:28.039714   47919 cri.go:89] found id: ""
	I0229 19:01:28.039741   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.039750   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:28.039763   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:28.039828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:28.102860   47919 cri.go:89] found id: ""
	I0229 19:01:28.102886   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.102897   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:28.102904   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:28.102971   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:28.160075   47919 cri.go:89] found id: ""
	I0229 19:01:28.160097   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.160104   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:28.160110   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:28.160180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:28.200297   47919 cri.go:89] found id: ""
	I0229 19:01:28.200317   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.200325   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:28.200330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:28.200393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:28.239912   47919 cri.go:89] found id: ""
	I0229 19:01:28.239944   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.239955   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:28.239963   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:28.240018   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:28.278525   47919 cri.go:89] found id: ""
	I0229 19:01:28.278550   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.278558   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:28.278564   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:28.278617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:28.315659   47919 cri.go:89] found id: ""
	I0229 19:01:28.315685   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.315693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:28.315703   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:28.315716   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:28.330102   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:28.330127   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:28.402474   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:28.402497   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:28.402513   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:28.486271   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:28.486308   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:28.531888   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:28.531918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:26.083338   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.083983   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.085481   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:29.684229   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.184054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.513642   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.514405   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
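
The pod_ready lines from the other process IDs (47608, 47515, 48088) are interleaved output from test profiles running in parallel, each polling its metrics-server pod for a Ready condition, which is why their timestamps occasionally step backwards relative to the surrounding 47919 entries. A hedged sketch of how such a pod could be inspected from the host, assuming the profile's kubectl context is available; the pod name is copied from the log above, and deploy/metrics-server assumes the deployment name usually created by the minikube metrics-server addon:

    # current state and recent events for the pod (image pull errors, failed probes, ...)
    kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-226bj
    # container logs, if the container ever started
    kubectl --context <profile> -n kube-system logs deploy/metrics-server
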
	I0229 19:01:31.082385   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:31.122771   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:31.122844   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:31.165097   47919 cri.go:89] found id: ""
	I0229 19:01:31.165127   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.165138   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:31.165148   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:31.165215   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:31.209449   47919 cri.go:89] found id: ""
	I0229 19:01:31.209482   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.209492   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:31.209498   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:31.209559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:31.249660   47919 cri.go:89] found id: ""
	I0229 19:01:31.249687   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.249698   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:31.249705   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:31.249770   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:31.299268   47919 cri.go:89] found id: ""
	I0229 19:01:31.299292   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.299301   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:31.299308   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:31.299363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:31.339078   47919 cri.go:89] found id: ""
	I0229 19:01:31.339111   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.339123   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:31.339131   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:31.339194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:31.378548   47919 cri.go:89] found id: ""
	I0229 19:01:31.378576   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.378587   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:31.378595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:31.378654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:31.418744   47919 cri.go:89] found id: ""
	I0229 19:01:31.418780   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.418812   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:31.418824   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:31.418889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:31.464078   47919 cri.go:89] found id: ""
	I0229 19:01:31.464103   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.464113   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:31.464124   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:31.464138   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.516406   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:31.516434   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:31.531504   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:31.531527   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:31.607391   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:31.607413   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:31.607426   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:31.691582   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:31.691609   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.233205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:34.250283   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:34.250345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:34.294588   47919 cri.go:89] found id: ""
	I0229 19:01:34.294620   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.294631   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:34.294639   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:34.294712   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:34.337033   47919 cri.go:89] found id: ""
	I0229 19:01:34.337061   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.337071   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:34.337079   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:34.337141   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:34.382800   47919 cri.go:89] found id: ""
	I0229 19:01:34.382831   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.382840   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:34.382845   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:34.382904   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:34.422931   47919 cri.go:89] found id: ""
	I0229 19:01:34.422959   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.422970   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:34.422977   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:34.423059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:34.469724   47919 cri.go:89] found id: ""
	I0229 19:01:34.469755   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.469765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:34.469773   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:34.469824   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:34.513428   47919 cri.go:89] found id: ""
	I0229 19:01:34.513461   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.513472   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:34.513479   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:34.513555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:34.552593   47919 cri.go:89] found id: ""
	I0229 19:01:34.552638   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.552648   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:34.552655   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:34.552717   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:34.596516   47919 cri.go:89] found id: ""
	I0229 19:01:34.596538   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.596546   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:34.596554   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:34.596568   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:34.611782   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:34.611805   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:34.694333   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:34.694352   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:34.694368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:34.781638   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:34.781669   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.832910   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:34.832943   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:32.584363   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.585650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.185025   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:36.683723   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.515185   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.013287   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.014417   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.398458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:37.415617   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:37.415696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:37.455390   47919 cri.go:89] found id: ""
	I0229 19:01:37.455421   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.455433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:37.455440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:37.455501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:37.498869   47919 cri.go:89] found id: ""
	I0229 19:01:37.498890   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.498901   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:37.498909   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:37.498972   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:37.538928   47919 cri.go:89] found id: ""
	I0229 19:01:37.538952   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.538960   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:37.538966   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:37.539012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:37.577278   47919 cri.go:89] found id: ""
	I0229 19:01:37.577299   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.577310   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:37.577317   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:37.577372   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:37.620313   47919 cri.go:89] found id: ""
	I0229 19:01:37.620342   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.620352   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:37.620359   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:37.620420   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:37.657696   47919 cri.go:89] found id: ""
	I0229 19:01:37.657717   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.657726   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:37.657734   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:37.657792   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:37.698814   47919 cri.go:89] found id: ""
	I0229 19:01:37.698833   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.698841   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:37.698848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:37.698902   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:37.736438   47919 cri.go:89] found id: ""
	I0229 19:01:37.736469   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.736480   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:37.736490   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:37.736506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:37.753849   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:37.753871   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:37.854740   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:37.854764   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:37.854783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:37.943837   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:37.943872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:37.988180   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:37.988209   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:37.084353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.582760   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.183743   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.184218   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.014652   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.014745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:40.543133   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:40.558453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:40.558526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:40.599794   47919 cri.go:89] found id: ""
	I0229 19:01:40.599814   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.599821   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:40.599827   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:40.599874   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:40.641738   47919 cri.go:89] found id: ""
	I0229 19:01:40.641762   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.641769   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:40.641775   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:40.641819   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:40.683905   47919 cri.go:89] found id: ""
	I0229 19:01:40.683935   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.683945   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:40.683953   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:40.684006   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:40.727645   47919 cri.go:89] found id: ""
	I0229 19:01:40.727675   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.727685   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:40.727693   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:40.727754   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:40.785142   47919 cri.go:89] found id: ""
	I0229 19:01:40.785172   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.785192   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:40.785199   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:40.785252   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:40.854534   47919 cri.go:89] found id: ""
	I0229 19:01:40.854560   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.854571   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:40.854580   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:40.854639   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:40.900823   47919 cri.go:89] found id: ""
	I0229 19:01:40.900851   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.900862   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:40.900869   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:40.900928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:40.938108   47919 cri.go:89] found id: ""
	I0229 19:01:40.938135   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.938146   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:40.938156   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:40.938171   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:40.987452   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:40.987482   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:41.037388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:41.037417   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:41.051987   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:41.052015   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:41.126077   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:41.126102   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:41.126116   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:43.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:43.730683   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:43.730755   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:43.790637   47919 cri.go:89] found id: ""
	I0229 19:01:43.790665   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.790676   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:43.790682   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:43.790731   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:43.848237   47919 cri.go:89] found id: ""
	I0229 19:01:43.848263   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.848272   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:43.848277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:43.848337   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:43.897892   47919 cri.go:89] found id: ""
	I0229 19:01:43.897920   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.897928   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:43.897934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:43.897989   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:43.936068   47919 cri.go:89] found id: ""
	I0229 19:01:43.936089   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.936097   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:43.936102   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:43.936149   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:43.978636   47919 cri.go:89] found id: ""
	I0229 19:01:43.978670   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.978682   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:43.978689   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:43.978751   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:44.018642   47919 cri.go:89] found id: ""
	I0229 19:01:44.018676   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.018684   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:44.018690   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:44.018737   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:44.056237   47919 cri.go:89] found id: ""
	I0229 19:01:44.056267   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.056278   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:44.056285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:44.056347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:44.095489   47919 cri.go:89] found id: ""
	I0229 19:01:44.095522   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.095532   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:44.095543   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:44.095557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:44.139407   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:44.139433   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:44.189893   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:44.189921   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:44.206426   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:44.206449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:44.285594   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:44.285621   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:44.285638   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:41.584614   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:44.083599   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.185509   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.683851   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.684064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.015082   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.017540   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:46.869271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:46.885267   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:46.885356   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:46.921696   47919 cri.go:89] found id: ""
	I0229 19:01:46.921718   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.921725   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:46.921731   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:46.921789   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:46.960265   47919 cri.go:89] found id: ""
	I0229 19:01:46.960291   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.960302   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:46.960309   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:46.960367   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:46.998035   47919 cri.go:89] found id: ""
	I0229 19:01:46.998062   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.998070   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:46.998075   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:46.998119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:47.041563   47919 cri.go:89] found id: ""
	I0229 19:01:47.041586   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.041595   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:47.041600   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:47.041643   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:47.084146   47919 cri.go:89] found id: ""
	I0229 19:01:47.084167   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.084174   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:47.084179   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:47.084227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:47.126813   47919 cri.go:89] found id: ""
	I0229 19:01:47.126835   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.126845   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:47.126853   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:47.126909   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:47.165379   47919 cri.go:89] found id: ""
	I0229 19:01:47.165399   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.165406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:47.165412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:47.165454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:47.204263   47919 cri.go:89] found id: ""
	I0229 19:01:47.204306   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.204316   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:47.204328   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:47.204345   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:47.248848   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:47.248876   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:47.299388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:47.299416   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:47.314484   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:47.314507   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:47.386231   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:47.386256   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:47.386272   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:46.084527   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:48.085557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:50.189188   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:52.684126   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.513497   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:51.514191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.515909   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.965988   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:49.980621   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:49.980700   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:50.025010   47919 cri.go:89] found id: ""
	I0229 19:01:50.025030   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.025037   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:50.025042   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:50.025090   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:50.066947   47919 cri.go:89] found id: ""
	I0229 19:01:50.066976   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.066984   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:50.066990   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:50.067061   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:50.108892   47919 cri.go:89] found id: ""
	I0229 19:01:50.108913   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.108931   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:50.108937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:50.108997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:50.149601   47919 cri.go:89] found id: ""
	I0229 19:01:50.149626   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.149636   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:50.149643   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:50.149704   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:50.191881   47919 cri.go:89] found id: ""
	I0229 19:01:50.191908   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.191918   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:50.191925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:50.191987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:50.233782   47919 cri.go:89] found id: ""
	I0229 19:01:50.233803   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.233811   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:50.233816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:50.233870   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:50.274913   47919 cri.go:89] found id: ""
	I0229 19:01:50.274941   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.274950   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:50.274955   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:50.275050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:50.321924   47919 cri.go:89] found id: ""
	I0229 19:01:50.321945   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.321953   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:50.321967   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:50.321978   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:50.367357   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:50.367388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:50.417229   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:50.417260   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:50.432031   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:50.432056   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:50.504920   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.504942   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:50.504960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.110884   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:53.126947   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:53.127004   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:53.166940   47919 cri.go:89] found id: ""
	I0229 19:01:53.166965   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.166975   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:53.166982   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:53.167054   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:53.205917   47919 cri.go:89] found id: ""
	I0229 19:01:53.205960   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.205968   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:53.205974   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:53.206030   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:53.245547   47919 cri.go:89] found id: ""
	I0229 19:01:53.245577   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.245587   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:53.245595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:53.245654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:53.287513   47919 cri.go:89] found id: ""
	I0229 19:01:53.287540   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.287550   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:53.287557   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:53.287617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:53.329269   47919 cri.go:89] found id: ""
	I0229 19:01:53.329299   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.329310   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:53.329318   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:53.329379   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:53.377438   47919 cri.go:89] found id: ""
	I0229 19:01:53.377467   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.377478   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:53.377485   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:53.377549   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:53.418414   47919 cri.go:89] found id: ""
	I0229 19:01:53.418440   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.418448   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:53.418453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:53.418514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:53.458365   47919 cri.go:89] found id: ""
	I0229 19:01:53.458393   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.458402   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:53.458409   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:53.458421   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.540710   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:53.540744   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:53.637271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:53.637302   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:53.687822   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:53.687850   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:53.703482   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:53.703506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:53.779564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.584198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.082170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:55.082683   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:54.685554   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.685951   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.013441   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:58.016917   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.280300   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:56.295210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:56.295295   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:56.336903   47919 cri.go:89] found id: ""
	I0229 19:01:56.336935   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.336945   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:56.336953   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:56.337002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:56.373300   47919 cri.go:89] found id: ""
	I0229 19:01:56.373322   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.373330   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:56.373338   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:56.373390   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:56.411949   47919 cri.go:89] found id: ""
	I0229 19:01:56.411975   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.411984   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:56.411990   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:56.412047   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:56.453302   47919 cri.go:89] found id: ""
	I0229 19:01:56.453329   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.453339   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:56.453344   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:56.453403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:56.490543   47919 cri.go:89] found id: ""
	I0229 19:01:56.490565   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.490576   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:56.490582   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:56.490637   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:56.547078   47919 cri.go:89] found id: ""
	I0229 19:01:56.547101   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.547108   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:56.547113   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:56.547171   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:56.598382   47919 cri.go:89] found id: ""
	I0229 19:01:56.598408   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.598417   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:56.598424   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:56.598478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:56.646090   47919 cri.go:89] found id: ""
	I0229 19:01:56.646117   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.646125   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:56.646134   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:56.646145   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:56.691685   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:56.691711   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:56.742886   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:56.742927   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:56.758326   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:56.758350   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:56.830140   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.830160   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:56.830177   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.414437   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:59.429710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:59.429793   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:59.473993   47919 cri.go:89] found id: ""
	I0229 19:01:59.474018   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.474025   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:59.474031   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:59.474091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:59.529114   47919 cri.go:89] found id: ""
	I0229 19:01:59.529143   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.529157   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:59.529164   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:59.529222   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:59.596624   47919 cri.go:89] found id: ""
	I0229 19:01:59.596654   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.596665   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:59.596672   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:59.596730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:59.641088   47919 cri.go:89] found id: ""
	I0229 19:01:59.641118   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.641130   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:59.641138   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:59.641198   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:59.682294   47919 cri.go:89] found id: ""
	I0229 19:01:59.682318   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.682327   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:59.682333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:59.682406   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:59.722881   47919 cri.go:89] found id: ""
	I0229 19:01:59.722902   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.722910   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:59.722915   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:59.722982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:59.761727   47919 cri.go:89] found id: ""
	I0229 19:01:59.761757   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.761767   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:59.761778   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:59.761839   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:59.805733   47919 cri.go:89] found id: ""
	I0229 19:01:59.805762   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.805772   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:59.805783   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:59.805798   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:59.883702   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:59.883721   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:59.883733   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:57.083166   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.085841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.183892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:01.184393   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:00.513790   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.013807   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.960649   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:59.960682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:00.012085   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:00.012121   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:00.065794   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:00.065834   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.583319   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:02.603123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:02.603178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:02.654992   47919 cri.go:89] found id: ""
	I0229 19:02:02.655017   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.655046   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:02.655053   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:02.655103   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:02.697067   47919 cri.go:89] found id: ""
	I0229 19:02:02.697098   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.697109   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:02.697116   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:02.697178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:02.734804   47919 cri.go:89] found id: ""
	I0229 19:02:02.734828   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.734835   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:02.734841   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:02.734893   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:02.778292   47919 cri.go:89] found id: ""
	I0229 19:02:02.778313   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.778321   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:02.778328   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:02.778382   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:02.819431   47919 cri.go:89] found id: ""
	I0229 19:02:02.819458   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.819470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:02.819478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:02.819537   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:02.862409   47919 cri.go:89] found id: ""
	I0229 19:02:02.862432   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.862439   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:02.862445   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:02.862487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:02.902486   47919 cri.go:89] found id: ""
	I0229 19:02:02.902513   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.902521   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:02.902526   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:02.902571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:02.939408   47919 cri.go:89] found id: ""
	I0229 19:02:02.939436   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.939443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:02.939451   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:02.939462   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.954539   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:02.954564   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:03.032534   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:03.032556   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:03.032574   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:03.116064   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:03.116096   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:03.167242   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:03.167265   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:01.582557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.583876   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:04.576948   47608 pod_ready.go:81] duration metric: took 4m0.00105469s waiting for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:04.576996   47608 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:04.577015   47608 pod_ready.go:38] duration metric: took 4m12.91384632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:04.577039   47608 kubeadm.go:640] restartCluster took 4m30.900514081s
	W0229 19:02:04.577101   47608 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:04.577137   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:03.684074   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.686050   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.686409   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.014368   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.518556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.718312   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:05.732879   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:05.733012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:05.774525   47919 cri.go:89] found id: ""
	I0229 19:02:05.774557   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.774569   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:05.774577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:05.774640   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:05.817870   47919 cri.go:89] found id: ""
	I0229 19:02:05.817900   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.817912   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:05.817919   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:05.817998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:05.859533   47919 cri.go:89] found id: ""
	I0229 19:02:05.859565   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.859579   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:05.859587   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:05.859646   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:05.904971   47919 cri.go:89] found id: ""
	I0229 19:02:05.905003   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.905014   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:05.905021   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:05.905086   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:05.950431   47919 cri.go:89] found id: ""
	I0229 19:02:05.950459   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.950470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:05.950478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:05.950546   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:05.999464   47919 cri.go:89] found id: ""
	I0229 19:02:05.999489   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.999500   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:05.999508   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:05.999588   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:06.045086   47919 cri.go:89] found id: ""
	I0229 19:02:06.045117   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.045133   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:06.045140   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:06.045203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:06.091542   47919 cri.go:89] found id: ""
	I0229 19:02:06.091571   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.091583   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:06.091592   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:06.091607   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:06.156524   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:06.156558   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:06.174941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:06.174965   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:06.260443   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:06.260467   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:06.260483   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:06.377415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:06.377457   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:08.931407   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:08.946035   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:08.946108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:08.989299   47919 cri.go:89] found id: ""
	I0229 19:02:08.989326   47919 logs.go:276] 0 containers: []
	W0229 19:02:08.989338   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:08.989345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:08.989405   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:09.033634   47919 cri.go:89] found id: ""
	I0229 19:02:09.033664   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.033677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:09.033684   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:09.033745   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:09.084381   47919 cri.go:89] found id: ""
	I0229 19:02:09.084406   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.084435   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:09.084442   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:09.084507   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:09.132526   47919 cri.go:89] found id: ""
	I0229 19:02:09.132555   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.132573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:09.132581   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:09.132644   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:09.182655   47919 cri.go:89] found id: ""
	I0229 19:02:09.182684   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.182694   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:09.182701   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:09.182764   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:09.223164   47919 cri.go:89] found id: ""
	I0229 19:02:09.223191   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.223202   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:09.223210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:09.223267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:09.271882   47919 cri.go:89] found id: ""
	I0229 19:02:09.271908   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.271926   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:09.271934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:09.271992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:09.331796   47919 cri.go:89] found id: ""
	I0229 19:02:09.331826   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.331837   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:09.331847   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:09.331860   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:09.398969   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:09.399009   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:09.418992   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:09.419040   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:09.503358   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:09.503381   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:09.503394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:09.612549   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:09.612586   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:10.184741   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.685204   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:10.024230   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.513343   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.162138   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:12.175827   47919 kubeadm.go:640] restartCluster took 4m14.562960798s
	W0229 19:02:12.175902   47919 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 19:02:12.175940   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:12.639231   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:12.658353   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:12.671552   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:12.684278   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:12.684323   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:02:12.903644   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:15.184189   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.184275   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:14.517015   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.015195   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.184474   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:21.184737   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.513735   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:22.016650   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:23.185852   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:25.685744   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:24.516493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:26.519091   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:29.013740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:28.184960   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:30.685098   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:31.013808   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:33.514912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.055439   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.47828283s)
	I0229 19:02:37.055501   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:37.077296   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:37.089984   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:37.100332   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:37.100379   47608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:02:37.156153   47608 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:02:37.156243   47608 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:02:37.317040   47608 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:02:37.317142   47608 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:02:37.317220   47608 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:02:37.551800   47608 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:02:33.184422   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:35.686104   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.553918   47608 out.go:204]   - Generating certificates and keys ...
	I0229 19:02:37.554019   47608 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:02:37.554099   47608 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:02:37.554197   47608 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:02:37.554271   47608 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:02:37.554545   47608 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:02:37.555258   47608 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:02:37.555792   47608 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:02:37.556150   47608 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:02:37.556697   47608 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:02:37.557215   47608 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:02:37.557744   47608 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:02:37.557835   47608 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:02:37.725663   47608 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:02:37.801114   47608 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:02:37.971825   47608 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:02:38.081281   47608 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:02:38.081986   47608 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:02:38.086435   47608 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:02:36.013356   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.014838   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.088264   47608 out.go:204]   - Booting up control plane ...
	I0229 19:02:38.088353   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:02:38.088442   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:02:38.088533   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:02:38.106686   47608 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:02:38.107606   47608 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:02:38.107671   47608 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:02:38.264387   47608 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:02:38.185682   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.684963   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.014933   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:42.016282   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.768315   47608 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503831 seconds
	I0229 19:02:44.768482   47608 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:02:44.786115   47608 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:02:45.321509   47608 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:02:45.321785   47608 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-991128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:02:45.834905   47608 kubeadm.go:322] [bootstrap-token] Using token: 53x4pg.x71etkalcz6sdqmq
	I0229 19:02:45.836192   47608 out.go:204]   - Configuring RBAC rules ...
	I0229 19:02:45.836319   47608 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:02:45.843486   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:02:45.854690   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:02:45.866571   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:02:45.870812   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:02:45.874413   47608 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:02:45.891377   47608 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:02:46.190541   47608 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:02:46.251452   47608 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:02:46.254418   47608 kubeadm.go:322] 
	I0229 19:02:46.254529   47608 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:02:46.254552   47608 kubeadm.go:322] 
	I0229 19:02:46.254653   47608 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:02:46.254663   47608 kubeadm.go:322] 
	I0229 19:02:46.254693   47608 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:02:46.254777   47608 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:02:46.254843   47608 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:02:46.254856   47608 kubeadm.go:322] 
	I0229 19:02:46.254932   47608 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:02:46.254949   47608 kubeadm.go:322] 
	I0229 19:02:46.255010   47608 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:02:46.255035   47608 kubeadm.go:322] 
	I0229 19:02:46.255115   47608 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:02:46.255219   47608 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:02:46.255288   47608 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:02:46.255298   47608 kubeadm.go:322] 
	I0229 19:02:46.255366   47608 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:02:46.255456   47608 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:02:46.255469   47608 kubeadm.go:322] 
	I0229 19:02:46.255574   47608 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.255704   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:02:46.255726   47608 kubeadm.go:322] 	--control-plane 
	I0229 19:02:46.255730   47608 kubeadm.go:322] 
	I0229 19:02:46.255838   47608 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:02:46.255850   47608 kubeadm.go:322] 
	I0229 19:02:46.255951   47608 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.256097   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:02:46.261669   47608 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:46.264240   47608 cni.go:84] Creating CNI manager for ""
	I0229 19:02:46.264255   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:02:46.266874   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:02:43.185008   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:45.685480   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.515334   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:47.014269   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:48.006787   48088 pod_ready.go:81] duration metric: took 4m0.000159724s waiting for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:48.006810   48088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:48.006828   48088 pod_ready.go:38] duration metric: took 4m13.055720974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:48.006852   48088 kubeadm.go:640] restartCluster took 4m30.764284147s
	W0229 19:02:48.006932   48088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:48.006958   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:46.268155   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:02:46.302630   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:02:46.363238   47608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:02:46.363314   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.363332   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=embed-certs-991128 minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.429324   47608 ops.go:34] apiserver oom_adj: -16
	I0229 19:02:46.736245   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.236707   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.736427   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.236379   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.736599   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.236640   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.736492   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:50.237145   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.184252   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.185542   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:52.683769   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.736510   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.236643   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.736840   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.236378   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.736992   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.236672   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.736958   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.236590   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.736323   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.237218   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.184845   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:57.685255   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:55.736774   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.236342   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.736380   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.236930   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.737100   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.237031   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.387963   47608 kubeadm.go:1088] duration metric: took 12.024710189s to wait for elevateKubeSystemPrivileges.
	I0229 19:02:58.388004   47608 kubeadm.go:406] StartCluster complete in 5m24.764885945s
	I0229 19:02:58.388027   47608 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.388120   47608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:02:58.390675   47608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.390953   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:02:58.391045   47608 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:02:58.391123   47608 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-991128"
	I0229 19:02:58.391146   47608 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-991128"
	W0229 19:02:58.391154   47608 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:02:58.391154   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:02:58.391203   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.391204   47608 addons.go:69] Setting default-storageclass=true in profile "embed-certs-991128"
	I0229 19:02:58.391244   47608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-991128"
	I0229 19:02:58.391596   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391624   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391698   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391718   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391204   47608 addons.go:69] Setting metrics-server=true in profile "embed-certs-991128"
	I0229 19:02:58.391948   47608 addons.go:234] Setting addon metrics-server=true in "embed-certs-991128"
	W0229 19:02:58.391957   47608 addons.go:243] addon metrics-server should already be in state true
	I0229 19:02:58.391993   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.392356   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.392387   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.409953   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0229 19:02:58.409972   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0229 19:02:58.410460   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.410478   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411005   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411018   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411018   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411048   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411360   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0229 19:02:58.411529   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411534   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411740   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411752   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.412075   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.412114   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.412144   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.412164   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.412662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.413148   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.413178   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.415173   47608 addons.go:234] Setting addon default-storageclass=true in "embed-certs-991128"
	W0229 19:02:58.415195   47608 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:02:58.415222   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.415608   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.415638   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.429891   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0229 19:02:58.430108   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0229 19:02:58.430343   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.430782   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.431278   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431299   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431355   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431369   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.431720   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.432048   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.432471   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.432497   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.432570   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0229 19:02:58.432926   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.433593   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.433611   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.433700   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.436201   47608 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:02:58.434375   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.437531   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:02:58.437549   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:02:58.437568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.436414   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.440191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.441799   47608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:02:58.440820   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.441382   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.443189   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.443204   47608 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.443216   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:02:58.443228   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.443226   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.443288   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.443399   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.443538   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.446253   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446573   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.446840   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446885   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.447103   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.447250   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.447399   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.449854   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0229 19:02:58.450308   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.450842   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.450862   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.451215   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.452123   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.453574   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.453805   47608 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.453822   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:02:58.453836   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.456718   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457141   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.457198   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.457891   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.458055   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.458208   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.622646   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.666581   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:02:58.680294   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:02:58.680319   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:02:58.701182   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.826426   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:02:58.826454   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:02:58.896074   47608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-991128" context rescaled to 1 replicas
	I0229 19:02:58.896112   47608 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:02:58.897987   47608 out.go:177] * Verifying Kubernetes components...
	I0229 19:02:58.899307   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:58.943695   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:02:58.943719   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:02:59.111473   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:00.514730   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.892048484s)
	I0229 19:03:00.514786   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.514797   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515119   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515140   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.515155   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.515151   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.515163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515407   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515422   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.525724   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.525747   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.526016   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.526034   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.526058   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.549463   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882844212s)
	I0229 19:03:00.549496   47608 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:01.032296   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331073482s)
	I0229 19:03:01.032299   47608 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.132962021s)
	I0229 19:03:01.032378   47608 node_ready.go:35] waiting up to 6m0s for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.032351   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032449   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.032776   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.032863   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.032884   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.032912   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032929   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.033250   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.033294   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.033313   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.054533   47608 node_ready.go:49] node "embed-certs-991128" has status "Ready":"True"
	I0229 19:03:01.054561   47608 node_ready.go:38] duration metric: took 22.162376ms waiting for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.054574   47608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:01.073737   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962221621s)
	I0229 19:03:01.073792   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.073807   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074112   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074134   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074144   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.074152   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074378   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.074414   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074423   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074438   47608 addons.go:470] Verifying addon metrics-server=true in "embed-certs-991128"
	I0229 19:03:01.076668   47608 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0229 19:03:00.186003   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:02.684214   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:01.077896   47608 addons.go:505] enable addons completed in 2.686848059s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0229 19:03:01.090039   47608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101161   47608 pod_ready.go:92] pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.101188   47608 pod_ready.go:81] duration metric: took 11.117889ms waiting for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101200   47608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106035   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.106059   47608 pod_ready.go:81] duration metric: took 4.853039ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106069   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112716   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.112741   47608 pod_ready.go:81] duration metric: took 6.663364ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112753   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117682   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.117712   47608 pod_ready.go:81] duration metric: took 4.950508ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117723   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449759   47608 pod_ready.go:92] pod "kube-proxy-5grst" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.449780   47608 pod_ready.go:81] duration metric: took 332.0508ms waiting for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449789   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837609   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.837633   47608 pod_ready.go:81] duration metric: took 387.837788ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837641   47608 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:03.844755   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.183456   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.184892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.844890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:09.185625   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:11.683928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:10.345767   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:12.346373   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:14.844773   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:13.684321   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.184064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:19.346873   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:18.185564   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.685386   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.199795   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.19281949s)
	I0229 19:03:20.199858   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:20.217490   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:20.230760   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:20.243524   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:20.243561   48088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:20.456117   48088 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:03:21.845081   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.845701   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.184306   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.185094   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.677354   47515 pod_ready.go:81] duration metric: took 4m0.000327645s waiting for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	E0229 19:03:25.677385   47515 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:03:25.677415   47515 pod_ready.go:38] duration metric: took 4m14.05174509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:25.677440   47515 kubeadm.go:640] restartCluster took 4m31.88709285s
	W0229 19:03:25.677495   47515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:03:25.677520   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:03:29.090699   48088 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:03:29.090795   48088 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:29.090912   48088 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:29.091058   48088 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:29.091185   48088 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:29.091273   48088 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:29.092712   48088 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:29.092825   48088 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:29.092914   48088 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:29.093021   48088 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:29.093110   48088 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:29.093199   48088 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:29.093273   48088 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:29.093353   48088 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:29.093430   48088 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:29.093523   48088 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:29.093617   48088 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:29.093668   48088 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:29.093741   48088 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:29.093811   48088 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:29.093880   48088 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:29.093962   48088 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:29.094031   48088 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:29.094133   48088 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:29.094211   48088 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:29.095825   48088 out.go:204]   - Booting up control plane ...
	I0229 19:03:29.095939   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:29.096048   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:29.096154   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:29.096322   48088 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:29.096423   48088 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:29.096489   48088 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:29.096694   48088 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:03:29.096769   48088 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003591 seconds
	I0229 19:03:29.096853   48088 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:03:29.096951   48088 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:03:29.097006   48088 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:03:29.097158   48088 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-153528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:03:29.097202   48088 kubeadm.go:322] [bootstrap-token] Using token: 1l0lv4.q8mu3aeamo8e3253
	I0229 19:03:29.098693   48088 out.go:204]   - Configuring RBAC rules ...
	I0229 19:03:29.098829   48088 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:03:29.098945   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:03:29.099166   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:03:29.099357   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:03:29.099502   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:03:29.099613   48088 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:03:29.099756   48088 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:03:29.099816   48088 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:03:29.099874   48088 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:03:29.099884   48088 kubeadm.go:322] 
	I0229 19:03:29.099961   48088 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:03:29.099970   48088 kubeadm.go:322] 
	I0229 19:03:29.100060   48088 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:03:29.100070   48088 kubeadm.go:322] 
	I0229 19:03:29.100100   48088 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:03:29.100173   48088 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:03:29.100239   48088 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:03:29.100252   48088 kubeadm.go:322] 
	I0229 19:03:29.100319   48088 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:03:29.100329   48088 kubeadm.go:322] 
	I0229 19:03:29.100388   48088 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:03:29.100398   48088 kubeadm.go:322] 
	I0229 19:03:29.100463   48088 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:03:29.100559   48088 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:03:29.100651   48088 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:03:29.100661   48088 kubeadm.go:322] 
	I0229 19:03:29.100763   48088 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:03:29.100862   48088 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:03:29.100877   48088 kubeadm.go:322] 
	I0229 19:03:29.100984   48088 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101114   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:03:29.101143   48088 kubeadm.go:322] 	--control-plane 
	I0229 19:03:29.101152   48088 kubeadm.go:322] 
	I0229 19:03:29.101249   48088 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:03:29.101258   48088 kubeadm.go:322] 
	I0229 19:03:29.101351   48088 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101473   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:03:29.101488   48088 cni.go:84] Creating CNI manager for ""
	I0229 19:03:29.101497   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:03:29.103073   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:03:29.104219   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:03:29.170952   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:03:29.239084   48088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:03:29.239154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:29.239173   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=default-k8s-diff-port-153528 minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:25.847505   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:28.346494   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:29.423784   48088 ops.go:34] apiserver oom_adj: -16
	I0229 19:03:29.641150   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.141394   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.641982   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.141220   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.642229   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.141232   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.641372   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.141757   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.641285   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:34.141462   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.346615   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:32.844207   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.846669   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.641857   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.142068   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.641289   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.142146   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.641965   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.141335   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.641778   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.141415   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.641267   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:39.141162   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.846708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.347339   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.642154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.141271   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.641433   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.141522   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.641353   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.787617   48088 kubeadm.go:1088] duration metric: took 12.548525295s to wait for elevateKubeSystemPrivileges.
	I0229 19:03:41.787657   48088 kubeadm.go:406] StartCluster complete in 5m24.60631624s
	I0229 19:03:41.787678   48088 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.787771   48088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:03:41.789341   48088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.789617   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:03:41.789716   48088 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:03:41.789815   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:03:41.789835   48088 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789835   48088 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789856   48088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153528"
	I0229 19:03:41.789821   48088 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789879   48088 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789890   48088 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:03:41.789937   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.789861   48088 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789963   48088 addons.go:243] addon metrics-server should already be in state true
	I0229 19:03:41.790008   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.790304   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790312   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790332   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790338   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790374   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790417   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.806924   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0229 19:03:41.807115   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0229 19:03:41.807481   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.807671   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808017   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808036   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808178   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808194   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808251   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0229 19:03:41.808377   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.808613   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808953   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.808999   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.809113   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.809136   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.809418   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809604   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809789   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.810683   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.810718   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.813030   48088 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.813045   48088 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:03:41.813066   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.813309   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.813321   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.824373   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0229 19:03:41.824768   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.825263   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.825280   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.825557   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.825699   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.827334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.828844   48088 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:03:41.829931   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:03:41.829943   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:03:41.829968   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.833079   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833090   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0229 19:03:41.833451   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.833516   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.833527   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.833895   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.833913   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0229 19:03:41.833917   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.833982   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.834140   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.834272   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.834795   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.835272   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.835293   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835298   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.835675   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835791   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.835798   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.835827   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.837394   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.839349   48088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:41.840971   48088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:41.840992   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:03:41.841008   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.844091   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844475   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.844505   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844735   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.844954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.845143   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.845300   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.853524   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0229 19:03:41.855329   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.855788   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.855809   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.856135   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.856317   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.857882   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.858179   48088 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:41.858193   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:03:41.858214   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.861292   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861640   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.861664   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.862088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.862241   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.862413   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:42.162741   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:03:42.162760   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:03:42.164559   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:42.185784   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:03:42.225413   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:42.283759   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:03:42.283792   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:03:42.296879   48088 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-153528" context rescaled to 1 replicas
	I0229 19:03:42.296912   48088 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:03:42.298687   48088 out.go:177] * Verifying Kubernetes components...
	I0229 19:03:42.300011   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:42.478347   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:42.478370   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:03:42.626185   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:44.654846   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469026575s)
	I0229 19:03:44.654876   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.429431888s)
	I0229 19:03:44.654891   48088 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:44.654927   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.654937   48088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.354896537s)
	I0229 19:03:44.654987   48088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.654942   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655090   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.490505268s)
	I0229 19:03:44.655115   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655125   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655326   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655344   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655346   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655354   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655357   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655370   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655379   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655562   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655604   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655579   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655662   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655821   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655659   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659331   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033110492s)
	I0229 19:03:44.659381   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659393   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659652   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659667   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659675   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659683   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659685   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659902   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659939   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659950   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659960   48088 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-153528"
	I0229 19:03:44.683397   48088 node_ready.go:49] node "default-k8s-diff-port-153528" has status "Ready":"True"
	I0229 19:03:44.683417   48088 node_ready.go:38] duration metric: took 28.415374ms waiting for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.683427   48088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:44.685811   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.685831   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.686088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.686110   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.686122   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.687970   48088 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0229 19:03:41.849469   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.345593   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.689232   48088 addons.go:505] enable addons completed in 2.899518009s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0229 19:03:44.693381   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720914   48088 pod_ready.go:92] pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.720942   48088 pod_ready.go:81] duration metric: took 27.53714ms waiting for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720954   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729596   48088 pod_ready.go:92] pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.729618   48088 pod_ready.go:81] duration metric: took 8.655818ms waiting for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729628   48088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734112   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.734130   48088 pod_ready.go:81] duration metric: took 4.494255ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734137   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738843   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.738860   48088 pod_ready.go:81] duration metric: took 4.717537ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738868   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059153   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.059174   48088 pod_ready.go:81] duration metric: took 320.300485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059183   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465590   48088 pod_ready.go:92] pod "kube-proxy-bvrxx" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.465616   48088 pod_ready.go:81] duration metric: took 406.426237ms waiting for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465630   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858390   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.858413   48088 pod_ready.go:81] duration metric: took 392.775547ms waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858426   48088 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:47.866057   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:46.848336   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.344899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.866128   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.871764   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.346608   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:53.846506   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.394324   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.716776929s)
	I0229 19:03:58.394415   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:58.411946   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:58.422778   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:58.432981   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:58.433029   47515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:58.497643   47515 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:03:58.497784   47515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:58.673058   47515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:58.673181   47515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:58.673291   47515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:58.915681   47515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:54.366316   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:56.866740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.867746   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.917365   47515 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:58.917468   47515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:58.917556   47515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:58.917657   47515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:58.917758   47515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:58.917857   47515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:58.917933   47515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:58.918117   47515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:58.918699   47515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:58.919679   47515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:58.920578   47515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:58.921424   47515 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:58.921738   47515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:59.066887   47515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:59.215266   47515 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:03:59.404270   47515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:59.514467   47515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:59.615483   47515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:59.616256   47515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:59.619177   47515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:55.850264   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.346720   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:59.620798   47515 out.go:204]   - Booting up control plane ...
	I0229 19:03:59.620910   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:59.621009   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:59.621758   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:59.648331   47515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:59.649070   47515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:59.649141   47515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:59.796018   47515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:00.868393   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.366167   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:00.848016   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.347491   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:05.801078   47515 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003292 seconds
	I0229 19:04:05.820231   47515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:04:05.842846   47515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:04:06.388308   47515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:04:06.388598   47515 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-247197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:04:06.905903   47515 kubeadm.go:322] [bootstrap-token] Using token: 42vs85.s8nvx0pxc27k9bgo
	I0229 19:04:06.907650   47515 out.go:204]   - Configuring RBAC rules ...
	I0229 19:04:06.907813   47515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:04:06.913716   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:04:06.925730   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:04:06.929319   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:04:06.933110   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:04:06.938550   47515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:04:06.956559   47515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:04:07.216913   47515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:04:07.320534   47515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:04:07.321455   47515 kubeadm.go:322] 
	I0229 19:04:07.321548   47515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:04:07.321578   47515 kubeadm.go:322] 
	I0229 19:04:07.321696   47515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:04:07.321710   47515 kubeadm.go:322] 
	I0229 19:04:07.321752   47515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:04:07.321848   47515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:04:07.321914   47515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:04:07.321929   47515 kubeadm.go:322] 
	I0229 19:04:07.322021   47515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:04:07.322032   47515 kubeadm.go:322] 
	I0229 19:04:07.322099   47515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:04:07.322111   47515 kubeadm.go:322] 
	I0229 19:04:07.322182   47515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:04:07.322304   47515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:04:07.322404   47515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:04:07.322416   47515 kubeadm.go:322] 
	I0229 19:04:07.322559   47515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:04:07.322679   47515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:04:07.322704   47515 kubeadm.go:322] 
	I0229 19:04:07.322808   47515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.322922   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:04:07.322956   47515 kubeadm.go:322] 	--control-plane 
	I0229 19:04:07.322964   47515 kubeadm.go:322] 
	I0229 19:04:07.323090   47515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:04:07.323103   47515 kubeadm.go:322] 
	I0229 19:04:07.323230   47515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.323408   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:04:07.323921   47515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:04:07.323961   47515 cni.go:84] Creating CNI manager for ""
	I0229 19:04:07.323975   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:04:07.325925   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:04:07.327319   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:04:07.387016   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:04:07.434438   47515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:04:07.434538   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.434554   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=no-preload-247197 minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.752182   47515 ops.go:34] apiserver oom_adj: -16
	I0229 19:04:07.752320   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.955017   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:04:08.955134   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:04:08.956493   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:08.956586   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:08.956684   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:08.956809   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:08.956955   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:08.957116   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:08.957253   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:08.957304   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:08.957375   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:08.959231   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:08.959317   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:08.959429   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:08.959550   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:08.959637   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:08.959745   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:08.959792   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:08.959851   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:08.959934   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:08.960022   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:08.960099   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:08.960159   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:08.960227   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:08.960303   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:08.960349   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:08.960403   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:08.960462   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:08.960540   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:05.369713   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.871542   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:08.962078   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:08.962181   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:08.962279   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:08.962361   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:08.962470   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:08.962646   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:08.962689   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:08.962777   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.962968   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963056   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963331   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963436   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963646   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963761   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963949   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964053   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.964273   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964281   47919 kubeadm.go:322] 
	I0229 19:04:08.964313   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:04:08.964351   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:04:08.964358   47919 kubeadm.go:322] 
	I0229 19:04:08.964385   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:04:08.964441   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:04:08.964547   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:04:08.964560   47919 kubeadm.go:322] 
	I0229 19:04:08.964684   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:04:08.964734   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:04:08.964780   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:04:08.964789   47919 kubeadm.go:322] 
	I0229 19:04:08.964922   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:04:08.965053   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:04:08.965180   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:04:08.965255   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:04:08.965342   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:04:08.965438   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 19:04:08.965475   47919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 19:04:08.965520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:04:09.441915   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:09.459807   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:04:09.471061   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:04:09.471099   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:04:09.532830   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:09.532979   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:09.673720   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:09.673884   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:09.674071   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:09.905201   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:09.906612   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:09.915393   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:10.035443   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:05.845532   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.846899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:09.847708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.037103   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:10.037203   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:10.037335   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:10.037453   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:10.037558   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:10.037689   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:10.037832   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:10.038465   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:10.038932   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:10.039471   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:10.039874   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:10.039961   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:10.040045   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:10.157741   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:10.426271   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:10.528768   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:10.595099   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:10.596020   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:08.252779   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.753332   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.252867   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.752631   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.253281   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.753138   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.253104   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.752894   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.253271   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.753046   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.367912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:12.870689   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.597781   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:10.597872   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:10.602307   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:10.603371   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:10.604660   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:10.607876   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:12.346304   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:14.346555   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:13.252668   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:13.752660   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.252803   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.752360   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.252343   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.752568   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.252484   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.752977   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.253148   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.753112   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:17.867839   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.253109   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:18.753221   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.253179   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.752851   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.875013   47515 kubeadm.go:1088] duration metric: took 12.44055176s to wait for elevateKubeSystemPrivileges.
	I0229 19:04:19.875056   47515 kubeadm.go:406] StartCluster complete in 5m26.137187745s
	I0229 19:04:19.875078   47515 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.875156   47515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:04:19.876716   47515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.876957   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:04:19.877116   47515 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:04:19.877196   47515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247197"
	I0229 19:04:19.877207   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:04:19.877222   47515 addons.go:69] Setting metrics-server=true in profile "no-preload-247197"
	I0229 19:04:19.877208   47515 addons.go:69] Setting default-storageclass=true in profile "no-preload-247197"
	I0229 19:04:19.877269   47515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247197"
	I0229 19:04:19.877213   47515 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247197"
	W0229 19:04:19.877372   47515 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:04:19.877412   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877244   47515 addons.go:234] Setting addon metrics-server=true in "no-preload-247197"
	W0229 19:04:19.877465   47515 addons.go:243] addon metrics-server should already be in state true
	I0229 19:04:19.877519   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877697   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877734   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877787   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877822   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877875   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877905   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.895578   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0229 19:04:19.896005   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.896491   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.896516   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.897033   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.897628   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.897677   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.897705   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0229 19:04:19.897711   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0229 19:04:19.898072   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898171   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898512   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898533   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898653   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898674   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898854   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899002   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899159   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.899386   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.899433   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.902917   47515 addons.go:234] Setting addon default-storageclass=true in "no-preload-247197"
	W0229 19:04:19.902937   47515 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:04:19.902965   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.903374   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.903492   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.915592   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0229 19:04:19.916152   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.916347   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0229 19:04:19.916677   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.916694   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.916799   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.917168   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.917302   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.917314   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.917505   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918075   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.918253   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918351   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0229 19:04:19.918773   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.919153   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.919170   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.919631   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.919999   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.922165   47515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:04:19.920215   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.920473   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.923441   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.923454   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:04:19.923466   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:04:19.923481   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.924990   47515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:04:16.845870   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:19.926366   47515 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:19.926372   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926384   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:04:19.926402   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.926728   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.926752   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926908   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.927072   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.927216   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.927357   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.929366   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929709   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.929728   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929855   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.930000   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.930090   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.930171   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.940292   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0229 19:04:19.940856   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.941327   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.941354   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.941647   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.941817   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.943378   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.943608   47515 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:19.943624   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:04:19.943640   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.946715   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947112   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.947132   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947413   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.947546   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.947672   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.947795   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:20.159078   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:20.246059   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:04:20.246085   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:04:20.338238   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:04:20.338261   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:04:20.365954   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:04:20.383186   47515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247197" context rescaled to 1 replicas
	I0229 19:04:20.383231   47515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:04:20.385225   47515 out.go:177] * Verifying Kubernetes components...
	I0229 19:04:20.386616   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:20.395136   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:20.442555   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:20.442575   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:04:20.584731   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:21.931286   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772173305s)
	I0229 19:04:21.931338   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931350   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.931346   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.565356284s)
	I0229 19:04:21.931374   47515 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 19:04:21.931413   47515 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.544778173s)
	I0229 19:04:21.931439   47515 node_ready.go:35] waiting up to 6m0s for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.931456   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.536286802s)
	I0229 19:04:21.931484   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931493   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932214   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932216   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932230   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932243   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932252   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932269   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932251   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932321   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932330   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932340   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932458   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932470   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932629   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932649   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932656   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.949312   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.949338   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.949619   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.949662   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.949675   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.951119   47515 node_ready.go:49] node "no-preload-247197" has status "Ready":"True"
	I0229 19:04:21.951138   47515 node_ready.go:38] duration metric: took 19.687343ms waiting for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.951148   47515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:04:21.965909   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979164   47515 pod_ready.go:92] pod "coredns-76f75df574-4k6hl" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.979185   47515 pod_ready.go:81] duration metric: took 13.25328ms waiting for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979197   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987905   47515 pod_ready.go:92] pod "coredns-76f75df574-9z6k5" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.987924   47515 pod_ready.go:81] duration metric: took 8.719445ms waiting for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987935   47515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992310   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.992328   47515 pod_ready.go:81] duration metric: took 4.385196ms waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992339   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999702   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.999722   47515 pod_ready.go:81] duration metric: took 7.374368ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999733   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.010201   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425431238s)
	I0229 19:04:22.010236   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010249   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010564   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010605   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010614   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010635   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010644   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010882   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010900   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010910   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010910   47515 addons.go:470] Verifying addon metrics-server=true in "no-preload-247197"
	I0229 19:04:22.013314   47515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 19:04:22.014366   47515 addons.go:505] enable addons completed in 2.137254118s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 19:04:22.338772   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.338799   47515 pod_ready.go:81] duration metric: took 339.058404ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.338812   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737254   47515 pod_ready.go:92] pod "kube-proxy-vvkjv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.737280   47515 pod_ready.go:81] duration metric: took 398.461074ms waiting for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737294   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:20.370710   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:22.866800   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:20.846680   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.345140   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.135406   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:23.135428   47515 pod_ready.go:81] duration metric: took 398.125345ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:23.135440   47515 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:25.142619   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.143696   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.367175   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.380854   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.346266   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.844825   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.846222   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.642557   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.143195   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.866361   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.365864   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.344240   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.345406   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.642612   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.642921   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.366701   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.865897   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:38.866354   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.845225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.344488   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.142773   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.643462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:40.866485   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.367569   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.345439   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.346065   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:44.142927   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:46.642548   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.369460   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.867209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.845033   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.845603   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:48.643538   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:51.143346   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.365414   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.366281   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.609556   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:50.610341   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:50.610592   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:50.347163   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.846321   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.847146   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:53.643605   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.644824   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.866162   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:57.366119   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.610941   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:55.611235   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:57.345852   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.846768   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:58.141799   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:00.142827   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.642593   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.867791   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.366238   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.345863   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.844340   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.643708   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:07.142551   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.367016   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:06.866170   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.869317   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:05.611726   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:05.611996   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:06.846686   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.846956   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:09.143595   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.143779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.367337   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.865929   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.345732   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.346279   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.644332   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:16.143576   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.866653   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.844887   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:17.846717   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.642599   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.642837   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.643895   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.368483   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.866758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.346170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.845477   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.142628   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.643975   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.366726   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.866780   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.612622   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:25.612856   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:25.346171   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.346624   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:29.844724   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.142942   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.143445   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.367152   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.865657   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:31.845835   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.347482   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.642780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.642919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.870444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:37.367617   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.844507   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.845472   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.643505   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.142928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:39.865207   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.867210   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.344604   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.347346   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.143348   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.143659   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.643054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:44.366192   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:46.368043   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:48.867455   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.844395   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.845753   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.143481   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.643947   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:51.365758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:53.866493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.344819   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.346315   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:54.845777   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.145751   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.644326   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.866532   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.866831   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:56.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.345840   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:00.142068   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.142779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.870256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.365280   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:01.845248   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.347842   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:05.613204   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:06:05.613467   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613495   47919 kubeadm.go:322] 
	I0229 19:06:05.613547   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:06:05.613598   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:06:05.613608   47919 kubeadm.go:322] 
	I0229 19:06:05.613653   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:06:05.613694   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:06:05.613814   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:06:05.613823   47919 kubeadm.go:322] 
	I0229 19:06:05.613911   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:06:05.613941   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:06:05.613974   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:06:05.613980   47919 kubeadm.go:322] 
	I0229 19:06:05.614107   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:06:05.614240   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:06:05.614361   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:06:05.614432   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:06:05.614533   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:06:05.614577   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:06:05.615575   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:06:05.615689   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:06:05.615765   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:06:05.615822   47919 kubeadm.go:406] StartCluster complete in 8m8.067253054s
	I0229 19:06:05.615873   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:06:05.615920   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:06:05.671959   47919 cri.go:89] found id: ""
	I0229 19:06:05.671998   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.672018   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:05.672025   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:06:05.672075   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:06:05.715832   47919 cri.go:89] found id: ""
	I0229 19:06:05.715853   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.715860   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:06:05.715866   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:06:05.715911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:06:05.755305   47919 cri.go:89] found id: ""
	I0229 19:06:05.755334   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.755345   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:06:05.755351   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:06:05.755409   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:06:05.807907   47919 cri.go:89] found id: ""
	I0229 19:06:05.807938   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.807950   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:05.807957   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:06:05.808015   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:06:05.892777   47919 cri.go:89] found id: ""
	I0229 19:06:05.892805   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.892813   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:05.892818   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:06:05.892877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:06:05.931488   47919 cri.go:89] found id: ""
	I0229 19:06:05.931516   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.931527   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:05.931534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:06:05.931578   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:06:05.971989   47919 cri.go:89] found id: ""
	I0229 19:06:05.972018   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.972030   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:05.972037   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:06:05.972112   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:06:06.008174   47919 cri.go:89] found id: ""
	I0229 19:06:06.008198   47919 logs.go:276] 0 containers: []
	W0229 19:06:06.008208   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:06.008224   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:06.008241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:06.024924   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:06.024953   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:06.111879   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:06.111904   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:06:06.111918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:06:06.221563   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:06:06.221593   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:06.266861   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:06.266897   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:06.314923   47919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:06:06.314971   47919 out.go:239] * 
	W0229 19:06:06.315043   47919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.315065   47919 out.go:239] * 
	W0229 19:06:06.315824   47919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:06:06.318988   47919 out.go:177] 
	W0229 19:06:06.320200   47919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.320245   47919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:06:06.320270   47919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:06:06.321598   47919 out.go:177] 
	
	
	==> CRI-O <==
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.123955286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709233568123932938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d10b978-1f8f-4ecc-a577-c3a50861ad3e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.124764283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1001eb7d-b90f-4ba0-b7f4-7229886f4aac name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.124845104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1001eb7d-b90f-4ba0-b7f4-7229886f4aac name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.124889356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1001eb7d-b90f-4ba0-b7f4-7229886f4aac name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.170730440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e8f6615-60b5-4275-950e-32287c49c422 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.170835503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e8f6615-60b5-4275-950e-32287c49c422 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.172419096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11b69b7a-9271-47d2-a038-6d4b6fdf63ca name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.172879570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709233568172855062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11b69b7a-9271-47d2-a038-6d4b6fdf63ca name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.173693220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e82c510e-83ed-4226-9378-8894c23b4ab2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.173742989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e82c510e-83ed-4226-9378-8894c23b4ab2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.173774631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e82c510e-83ed-4226-9378-8894c23b4ab2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.213405708Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8221612-7b5e-4ed2-ba13-ace5eec651d3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.213533539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8221612-7b5e-4ed2-ba13-ace5eec651d3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.214723875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c903115-8f71-4c2e-9405-cfc8f2ef236b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.215077096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709233568215053726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c903115-8f71-4c2e-9405-cfc8f2ef236b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.215785051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fccd181-fec9-4794-8f13-b3222ad410f4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.215855855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fccd181-fec9-4794-8f13-b3222ad410f4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.215904011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6fccd181-fec9-4794-8f13-b3222ad410f4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.258315489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b064d1f-2e7b-4b74-8c5f-7efa01355650 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.258430446Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b064d1f-2e7b-4b74-8c5f-7efa01355650 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.266730640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3b52129-b43e-4237-8e44-a9a14abfa8f5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.267247380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709233568267219342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3b52129-b43e-4237-8e44-a9a14abfa8f5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.268527288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67b4ed9d-fa54-4e99-af2b-3bf3caf89562 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.268792339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67b4ed9d-fa54-4e99-af2b-3bf3caf89562 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:06:08 old-k8s-version-631080 crio[643]: time="2024-02-29 19:06:08.269037746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67b4ed9d-fa54-4e99-af2b-3bf3caf89562 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053084] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651606] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.237160] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.709570] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.273436] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.071452] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078075] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.234498] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.167610] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.309321] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Feb29 18:58] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062684] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 19:02] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.069082] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 19:04] systemd-fstab-generator[9767]: Ignoring "noauto" option for root device
	[  +0.062408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:06:08 up 8 min,  0 users,  load average: 0.45, 0.47, 0.25
	Linux old-k8s-version-631080 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 19:06:06 old-k8s-version-631080 kubelet[11449]: F0229 19:06:06.656287   11449 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:06:06 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:06:06 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:06:07 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 19:06:07 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:06:07 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: I0229 19:06:07.391913   11468 server.go:410] Version: v1.16.0
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: I0229 19:06:07.392177   11468 plugins.go:100] No cloud provider specified.
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: I0229 19:06:07.392189   11468 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: I0229 19:06:07.395262   11468 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: W0229 19:06:07.396309   11468 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:06:07 old-k8s-version-631080 kubelet[11468]: F0229 19:06:07.396347   11468 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:06:07 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:06:07 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:06:08 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Feb 29 19:06:08 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:06:08 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: I0229 19:06:08.135342   11499 server.go:410] Version: v1.16.0
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: I0229 19:06:08.135615   11499 plugins.go:100] No cloud provider specified.
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: I0229 19:06:08.135628   11499 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: I0229 19:06:08.138314   11499 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: W0229 19:06:08.139328   11499 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:06:08 old-k8s-version-631080 kubelet[11499]: F0229 19:06:08.139406   11499 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:06:08 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:06:08 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (254.533569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-631080" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (774.88s)
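Note: every kubelet restart in the log above dies with the same fatal error, "failed to run Kubelet: mountpoint for cpu not found", which is what trips the K8S_KUBELET_NOT_RUNNING exit. As a hedged manual-reproduction sketch (not part of the recorded run; it only replays the suggestion minikube itself prints), the node's cgroup mounts can be inspected and the suggested cgroup-driver override retried:

	# check whether a cpu cgroup controller is mounted on the node (same ssh form used elsewhere in this report)
	out/minikube-linux-amd64 -p old-k8s-version-631080 ssh "mount | grep cgroup"
	# retry the start with the override suggested in the failure output above
	out/minikube-linux-amd64 start -p old-k8s-version-631080 --extra-config=kubelet.cgroup-driver=systemd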

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528: exit status 3 (3.167495256s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:53:30.003292   47988 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host
	E0229 18:53:30.003308   47988 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-153528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-153528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153637029s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-153528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528: exit status 3 (3.062759933s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:53:39.219416   48058 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host
	E0229 18:53:39.219437   48058 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-153528" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
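For context, the two checks this test makes can be replayed by hand with the exact commands shown above; a hedged sketch (the test expects the post-stop host status "Stopped", whereas this run returned "Error" because SSH to 192.168.39.210:22 was unreachable):

	# post-stop host status; expected value is "Stopped"
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
	# addon enable after stop; fails here with MK_ADDON_ENABLE_PAUSED because the node cannot be reached over SSH
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-153528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4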

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
[the preceding helpers_test.go:329 WARNING line repeats 94 more times; the apiserver at 192.168.83.214:8443 stays unreachable throughout]
E0229 19:07:43.785567   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
E0229 19:07:46.663001   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
E0229 19:12:43.785847   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 19:12:46.662991   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: [the preceding "connection refused" pod-list warning repeated identically a further 137 times over the 9m0s wait; duplicate lines omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (249.644378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-631080" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (244.136208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25: (1.803995413s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:53:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:53:39.272407   48088 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:53:39.272662   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272672   48088 out.go:304] Setting ErrFile to fd 2...
	I0229 18:53:39.272676   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272900   48088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:53:39.273517   48088 out.go:298] Setting JSON to false
	I0229 18:53:39.274405   48088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1709227056,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:53:39.274466   48088 start.go:139] virtualization: kvm guest
	I0229 18:53:39.276633   48088 out.go:177] * [default-k8s-diff-port-153528] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:53:39.278195   48088 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:53:39.278144   48088 notify.go:220] Checking for updates...
	I0229 18:53:39.280040   48088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:53:39.281568   48088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:53:39.282972   48088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:53:39.284383   48088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:53:39.285858   48088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:53:39.287467   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:53:39.287851   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.287889   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.302503   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0229 18:53:39.302895   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.303402   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.303427   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.303737   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.303893   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.304118   48088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:53:39.304507   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.304554   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.318572   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0229 18:53:39.318978   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.319454   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.319482   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.319748   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.319924   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.351526   48088 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:53:39.352970   48088 start.go:299] selected driver: kvm2
	I0229 18:53:39.352988   48088 start.go:903] validating driver "kvm2" against &{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.353115   48088 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:53:39.353788   48088 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.353869   48088 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:53:39.369184   48088 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:53:39.369569   48088 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:53:39.369647   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:53:39.369664   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:53:39.369679   48088 start_flags.go:323] config:
	{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.369878   48088 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.372634   48088 out.go:177] * Starting control plane node default-k8s-diff-port-153528 in cluster default-k8s-diff-port-153528
	I0229 18:53:41.043270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:39.373930   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:53:39.373998   48088 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:53:39.374011   48088 cache.go:56] Caching tarball of preloaded images
	I0229 18:53:39.374104   48088 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:53:39.374116   48088 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:53:39.374227   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:53:39.374456   48088 start.go:365] acquiring machines lock for default-k8s-diff-port-153528: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:53:44.115305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:50.195317   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:53.267316   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:59.347225   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:02.419258   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:08.499302   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:11.571267   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:17.651296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:20.723290   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:26.803304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:29.875293   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:35.955253   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:39.027319   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:45.107197   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:48.179318   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:54.259261   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:57.331310   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:03.411271   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:06.483320   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:12.563270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:15.635250   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:21.715338   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:24.787238   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:30.867305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:33.939296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:40.019217   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:43.091236   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:49.171281   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:52.243241   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:58.323315   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:01.395368   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:07.475286   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:10.547288   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:16.627301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:19.699291   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:25.779304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:28.851346   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:34.931303   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:38.003301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:44.083295   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:47.155306   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:53.235287   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:56.307311   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:02.387296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:05.391079   47608 start.go:369] acquired machines lock for "embed-certs-991128" in 4m30.01926313s
	I0229 18:57:05.391125   47608 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:05.391130   47608 fix.go:54] fixHost starting: 
	I0229 18:57:05.391473   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:05.391502   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:05.406385   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0229 18:57:05.406855   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:05.407342   47608 main.go:141] libmachine: Using API Version  1
	I0229 18:57:05.407366   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:05.407730   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:05.407939   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:05.408088   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 18:57:05.409862   47608 fix.go:102] recreateIfNeeded on embed-certs-991128: state=Stopped err=<nil>
	I0229 18:57:05.409895   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	W0229 18:57:05.410005   47608 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:05.411812   47608 out.go:177] * Restarting existing kvm2 VM for "embed-certs-991128" ...
	I0229 18:57:05.389096   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:05.389139   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:57:05.390953   47515 machine.go:91] provisioned docker machine in 4m37.390712428s
	I0229 18:57:05.390991   47515 fix.go:56] fixHost completed within 4m37.410903519s
	I0229 18:57:05.390997   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 4m37.410926595s
	W0229 18:57:05.391017   47515 start.go:694] error starting host: provision: host is not running
	W0229 18:57:05.391155   47515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 18:57:05.391169   47515 start.go:709] Will try again in 5 seconds ...
	I0229 18:57:05.413295   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Start
	I0229 18:57:05.413478   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring networks are active...
	I0229 18:57:05.414184   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network default is active
	I0229 18:57:05.414495   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network mk-embed-certs-991128 is active
	I0229 18:57:05.414834   47608 main.go:141] libmachine: (embed-certs-991128) Getting domain xml...
	I0229 18:57:05.415508   47608 main.go:141] libmachine: (embed-certs-991128) Creating domain...
	I0229 18:57:06.606675   47608 main.go:141] libmachine: (embed-certs-991128) Waiting to get IP...
	I0229 18:57:06.607445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.607771   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.607826   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.607762   48607 retry.go:31] will retry after 250.745087ms: waiting for machine to come up
	I0229 18:57:06.860293   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.860711   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.860738   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.860671   48607 retry.go:31] will retry after 259.096096ms: waiting for machine to come up
	I0229 18:57:07.121033   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.121429   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.121458   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.121381   48607 retry.go:31] will retry after 318.126905ms: waiting for machine to come up
	I0229 18:57:07.440859   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.441299   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.441328   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.441243   48607 retry.go:31] will retry after 570.321317ms: waiting for machine to come up
	I0229 18:57:08.012896   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.013331   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.013367   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.013295   48607 retry.go:31] will retry after 489.540139ms: waiting for machine to come up
	I0229 18:57:08.503916   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.504321   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.504358   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.504269   48607 retry.go:31] will retry after 929.011093ms: waiting for machine to come up
	I0229 18:57:09.435395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:09.435803   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:09.435851   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:09.435761   48607 retry.go:31] will retry after 1.087849565s: waiting for machine to come up
	I0229 18:57:10.391806   47515 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:57:10.525247   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:10.525663   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:10.525697   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:10.525612   48607 retry.go:31] will retry after 954.10405ms: waiting for machine to come up
	I0229 18:57:11.481162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:11.481610   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:11.481640   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:11.481558   48607 retry.go:31] will retry after 1.495484693s: waiting for machine to come up
	I0229 18:57:12.979123   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:12.979547   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:12.979572   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:12.979499   48607 retry.go:31] will retry after 2.307927756s: waiting for machine to come up
	I0229 18:57:15.288445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:15.288841   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:15.288871   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:15.288785   48607 retry.go:31] will retry after 2.89615753s: waiting for machine to come up
	I0229 18:57:18.188102   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:18.188474   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:18.188504   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:18.188426   48607 retry.go:31] will retry after 3.511036368s: waiting for machine to come up
	I0229 18:57:21.701039   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:21.701395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:21.701425   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:21.701356   48607 retry.go:31] will retry after 3.516537008s: waiting for machine to come up
	I0229 18:57:25.220199   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220641   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has current primary IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220655   47608 main.go:141] libmachine: (embed-certs-991128) Found IP for machine: 192.168.61.34
	I0229 18:57:25.220663   47608 main.go:141] libmachine: (embed-certs-991128) Reserving static IP address...
	I0229 18:57:25.221122   47608 main.go:141] libmachine: (embed-certs-991128) Reserved static IP address: 192.168.61.34
	I0229 18:57:25.221162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.221179   47608 main.go:141] libmachine: (embed-certs-991128) Waiting for SSH to be available...
	I0229 18:57:25.221222   47608 main.go:141] libmachine: (embed-certs-991128) DBG | skip adding static IP to network mk-embed-certs-991128 - found existing host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"}
	I0229 18:57:25.221243   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Getting to WaitForSSH function...
	I0229 18:57:25.223450   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223775   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.223809   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223951   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH client type: external
	I0229 18:57:25.223981   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa (-rw-------)
	I0229 18:57:25.224014   47608 main.go:141] libmachine: (embed-certs-991128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:25.224032   47608 main.go:141] libmachine: (embed-certs-991128) DBG | About to run SSH command:
	I0229 18:57:25.224052   47608 main.go:141] libmachine: (embed-certs-991128) DBG | exit 0
	I0229 18:57:26.464131   47919 start.go:369] acquired machines lock for "old-k8s-version-631080" in 4m11.42071391s
	I0229 18:57:26.464193   47919 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:26.464200   47919 fix.go:54] fixHost starting: 
	I0229 18:57:26.464621   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:26.464657   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:26.480155   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0229 18:57:26.480488   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:26.481000   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:57:26.481027   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:26.481327   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:26.481514   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:26.481669   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:57:26.482869   47919 fix.go:102] recreateIfNeeded on old-k8s-version-631080: state=Stopped err=<nil>
	I0229 18:57:26.482885   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	W0229 18:57:26.483052   47919 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:26.485421   47919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	I0229 18:57:25.351081   47608 main.go:141] libmachine: (embed-certs-991128) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:25.351434   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetConfigRaw
	I0229 18:57:25.352022   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.354349   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354705   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.354734   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354944   47608 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/config.json ...
	I0229 18:57:25.355150   47608 machine.go:88] provisioning docker machine ...
	I0229 18:57:25.355169   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:25.355351   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355501   47608 buildroot.go:166] provisioning hostname "embed-certs-991128"
	I0229 18:57:25.355528   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355763   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.357784   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358109   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.358134   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.358429   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358567   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358683   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.358840   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.359062   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.359078   47608 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-991128 && echo "embed-certs-991128" | sudo tee /etc/hostname
	I0229 18:57:25.487161   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991128
	
	I0229 18:57:25.487197   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.489979   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490275   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.490308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490539   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.490755   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.490908   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.491047   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.491191   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.491377   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.491405   47608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-991128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-991128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-991128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:25.617911   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:25.617941   47608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:25.617961   47608 buildroot.go:174] setting up certificates
	I0229 18:57:25.617971   47608 provision.go:83] configureAuth start
	I0229 18:57:25.617980   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.618235   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.620943   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621286   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.621318   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621460   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.623629   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.623936   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.623961   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.624074   47608 provision.go:138] copyHostCerts
	I0229 18:57:25.624133   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:25.624154   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:25.624240   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:25.624344   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:25.624355   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:25.624383   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:25.624455   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:25.624462   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:25.624483   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:25.624538   47608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.embed-certs-991128 san=[192.168.61.34 192.168.61.34 localhost 127.0.0.1 minikube embed-certs-991128]
	I0229 18:57:25.757225   47608 provision.go:172] copyRemoteCerts
	I0229 18:57:25.757278   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:25.757301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.759794   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760098   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.760125   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760287   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.760488   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.760664   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.760798   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:25.849527   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:57:25.875673   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:25.902046   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:57:25.927830   47608 provision.go:86] duration metric: configureAuth took 309.850774ms
	I0229 18:57:25.927862   47608 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:25.928081   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:57:25.928163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.930565   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.930917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.930945   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.931135   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.931336   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931493   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931649   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.931806   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.932003   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.932026   47608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:26.205080   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:26.205139   47608 machine.go:91] provisioned docker machine in 849.974413ms
	I0229 18:57:26.205154   47608 start.go:300] post-start starting for "embed-certs-991128" (driver="kvm2")
	I0229 18:57:26.205168   47608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:26.205191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.205537   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:26.205568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.208107   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208417   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.208443   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208625   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.208804   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.208975   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.209084   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.303090   47608 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:26.309522   47608 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:26.309543   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:26.309609   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:26.309697   47608 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:26.309800   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:26.319897   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:26.346220   47608 start.go:303] post-start completed in 141.055399ms
	I0229 18:57:26.346242   47608 fix.go:56] fixHost completed within 20.955110287s
	I0229 18:57:26.346265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.348878   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349237   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.349278   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349415   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.349591   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349742   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349860   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.350032   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:26.350224   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:26.350235   47608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:26.463992   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233046.436502673
	
	I0229 18:57:26.464017   47608 fix.go:206] guest clock: 1709233046.436502673
	I0229 18:57:26.464027   47608 fix.go:219] Guest: 2024-02-29 18:57:26.436502673 +0000 UTC Remote: 2024-02-29 18:57:26.346246091 +0000 UTC m=+291.120011459 (delta=90.256582ms)
	I0229 18:57:26.464055   47608 fix.go:190] guest clock delta is within tolerance: 90.256582ms
	I0229 18:57:26.464062   47608 start.go:83] releasing machines lock for "embed-certs-991128", held for 21.072955529s
	I0229 18:57:26.464099   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.464362   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:26.466954   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.467350   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467452   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468058   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468227   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468287   47608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:26.468356   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.468456   47608 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:26.468477   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.470917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.470996   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471291   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471322   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471352   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471369   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471562   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471602   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471719   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471783   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471873   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.471940   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.472005   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.472095   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.560629   47608 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:26.587852   47608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:26.752819   47608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:26.760557   47608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:26.760629   47608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:26.778065   47608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:26.778096   47608 start.go:475] detecting cgroup driver to use...
	I0229 18:57:26.778156   47608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:26.795970   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:26.810591   47608 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:26.810634   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:26.826715   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:26.840879   47608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:26.959536   47608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:27.143802   47608 docker.go:233] disabling docker service ...
	I0229 18:57:27.143856   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:27.164748   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:27.183161   47608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:27.322659   47608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:27.471650   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:27.489290   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:27.512706   47608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:57:27.512770   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.524596   47608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:27.524657   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.536202   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.547343   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.558390   47608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:27.571297   47608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:27.580859   47608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:27.580903   47608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:27.595324   47608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:27.606130   47608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:27.736363   47608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:27.877719   47608 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:27.877804   47608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:27.882920   47608 start.go:543] Will wait 60s for crictl version
	I0229 18:57:27.883035   47608 ssh_runner.go:195] Run: which crictl
	I0229 18:57:27.887132   47608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:27.925964   47608 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:27.926061   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.958046   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.991575   47608 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:57:26.486586   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .Start
	I0229 18:57:26.486734   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:57:26.487377   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:57:26.487679   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:57:26.488006   47919 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:57:26.488624   47919 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:57:27.689480   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:57:27.690414   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:27.690858   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:27.690932   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:27.690848   48724 retry.go:31] will retry after 309.860592ms: waiting for machine to come up
	I0229 18:57:28.002437   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.002926   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.002959   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.002884   48724 retry.go:31] will retry after 298.018759ms: waiting for machine to come up
	I0229 18:57:28.302325   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.302849   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.302879   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.302801   48724 retry.go:31] will retry after 312.821928ms: waiting for machine to come up
	I0229 18:57:28.617315   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.617797   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.617831   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.617753   48724 retry.go:31] will retry after 373.960028ms: waiting for machine to come up
	I0229 18:57:28.993230   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.993860   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.993881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.993809   48724 retry.go:31] will retry after 516.423282ms: waiting for machine to come up
	I0229 18:57:29.512208   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:29.512683   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:29.512718   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:29.512651   48724 retry.go:31] will retry after 776.839747ms: waiting for machine to come up
	I0229 18:57:27.992835   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:27.995847   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996225   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:27.996255   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996483   47608 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:28.001148   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:28.016232   47608 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:57:28.016293   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:28.055181   47608 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:57:28.055248   47608 ssh_runner.go:195] Run: which lz4
	I0229 18:57:28.059680   47608 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:28.064299   47608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:28.064330   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:57:29.988576   47608 crio.go:444] Took 1.928948 seconds to copy over tarball
	I0229 18:57:29.988670   47608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:30.290748   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:30.291228   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:30.291276   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:30.291195   48724 retry.go:31] will retry after 846.002471ms: waiting for machine to come up
	I0229 18:57:31.139734   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:31.140157   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:31.140177   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:31.140114   48724 retry.go:31] will retry after 1.01688411s: waiting for machine to come up
	I0229 18:57:32.158306   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:32.158845   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:32.158868   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:32.158827   48724 retry.go:31] will retry after 1.217119434s: waiting for machine to come up
	I0229 18:57:33.377121   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:33.377508   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:33.377538   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:33.377475   48724 retry.go:31] will retry after 1.566910779s: waiting for machine to come up
	I0229 18:57:32.844311   47608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.855608287s)
	I0229 18:57:32.844344   47608 crio.go:451] Took 2.855747 seconds to extract the tarball
	I0229 18:57:32.844356   47608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:32.890199   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:32.953328   47608 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:57:32.953351   47608 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:57:32.953408   47608 ssh_runner.go:195] Run: crio config
	I0229 18:57:33.006678   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:33.006701   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:33.006717   47608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:33.006734   47608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-991128 NodeName:embed-certs-991128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:57:33.006872   47608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-991128"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:33.006951   47608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-991128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:33.006998   47608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:57:33.018746   47608 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:33.018824   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:33.029994   47608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 18:57:33.050522   47608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:33.070313   47608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0229 18:57:33.091436   47608 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:33.096253   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:33.110683   47608 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128 for IP: 192.168.61.34
	I0229 18:57:33.110720   47608 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:33.110892   47608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:33.110957   47608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:33.111075   47608 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/client.key
	I0229 18:57:33.111147   47608 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key.d8cf1313
	I0229 18:57:33.111195   47608 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key
	I0229 18:57:33.111320   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:33.111352   47608 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:33.111362   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:33.111383   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:33.111406   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:33.111443   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:33.111479   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:33.112071   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:33.143498   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:57:33.171567   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:33.199300   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:57:33.226492   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:33.254025   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:33.281215   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:33.311188   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:33.342138   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:33.373884   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:33.401130   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:33.427527   47608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:33.446246   47608 ssh_runner.go:195] Run: openssl version
	I0229 18:57:33.455476   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:33.473394   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478904   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478961   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.485913   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:33.499458   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:33.512861   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518749   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518808   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.525612   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:33.539397   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:33.552302   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557481   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557543   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.564226   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:33.577315   47608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:33.582527   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:33.589246   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:33.595992   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:33.602535   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:33.609231   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:33.616292   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:33.623124   47608 kubeadm.go:404] StartCluster: {Name:embed-certs-991128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:33.623239   47608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:33.623281   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:33.663871   47608 cri.go:89] found id: ""
	I0229 18:57:33.663948   47608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:33.676484   47608 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:33.676519   47608 kubeadm.go:636] restartCluster start
	I0229 18:57:33.676576   47608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:33.690000   47608 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:33.690903   47608 kubeconfig.go:92] found "embed-certs-991128" server: "https://192.168.61.34:8443"
	I0229 18:57:33.692909   47608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:33.706062   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:33.706162   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:33.722166   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.206285   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.206371   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.222736   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.706286   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.706415   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.721170   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:35.206815   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.206905   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.223777   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.946027   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:35.171546   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:35.171576   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:34.946337   48724 retry.go:31] will retry after 2.169140366s: waiting for machine to come up
	I0229 18:57:37.117080   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:37.117531   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:37.117564   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:37.117491   48724 retry.go:31] will retry after 2.187461538s: waiting for machine to come up
	I0229 18:57:39.307825   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:39.308159   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:39.308199   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:39.308131   48724 retry.go:31] will retry after 4.480150028s: waiting for machine to come up
	I0229 18:57:35.706239   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.706327   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.727095   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.206608   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.206718   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.220509   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.707149   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.707237   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.725852   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.206401   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.206530   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.225323   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.706920   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.707051   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.725340   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.207012   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.207113   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.225343   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.706906   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.706988   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.720820   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.206324   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.206399   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.220757   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.706274   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.706361   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.719994   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:40.206511   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.206589   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.219998   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.790597   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:43.791050   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:43.791076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:43.790999   48724 retry.go:31] will retry after 3.830907426s: waiting for machine to come up
	I0229 18:57:40.706115   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.706262   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.719892   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.206440   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.206518   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.220057   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.706585   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.706677   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.720355   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.206977   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.207107   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.220629   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.706185   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.706266   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.720230   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.206832   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.206901   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.221019   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.706611   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.706693   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.720457   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.720489   47608 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:57:43.720501   47608 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:57:43.720515   47608 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:57:43.720572   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:43.757509   47608 cri.go:89] found id: ""
	I0229 18:57:43.757592   47608 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:57:43.777950   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:57:43.788404   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:57:43.788455   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799322   47608 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799340   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:43.907052   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.731907   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.940317   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.040382   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.113335   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:57:45.113418   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:48.808893   48088 start.go:369] acquired machines lock for "default-k8s-diff-port-153528" in 4m9.434383703s
	I0229 18:57:48.808960   48088 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:48.808973   48088 fix.go:54] fixHost starting: 
	I0229 18:57:48.809402   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:48.809445   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:48.829022   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0229 18:57:48.829448   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:48.830097   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:57:48.830129   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:48.830547   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:48.830766   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:57:48.830918   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 18:57:48.832707   48088 fix.go:102] recreateIfNeeded on default-k8s-diff-port-153528: state=Stopped err=<nil>
	I0229 18:57:48.832733   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	W0229 18:57:48.832879   48088 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:48.834969   48088 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-153528" ...
	I0229 18:57:48.836190   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Start
	I0229 18:57:48.836352   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring networks are active...
	I0229 18:57:48.837051   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network default is active
	I0229 18:57:48.837440   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network mk-default-k8s-diff-port-153528 is active
	I0229 18:57:48.837886   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Getting domain xml...
	I0229 18:57:48.838747   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Creating domain...
	I0229 18:57:47.623408   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623861   47919 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:57:47.623891   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623900   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:57:47.624340   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:57:47.624374   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.624390   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:57:47.624419   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | skip adding static IP to network mk-old-k8s-version-631080 - found existing host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"}
	I0229 18:57:47.624440   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:57:47.626600   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.626881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.626904   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.627042   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:57:47.627070   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:57:47.627106   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:47.627127   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:57:47.627146   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:57:47.751206   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:47.751569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:57:47.752158   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.754701   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755064   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.755089   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755331   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:57:47.755551   47919 machine.go:88] provisioning docker machine ...
	I0229 18:57:47.755569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:47.755772   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.755961   47919 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:57:47.755979   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.756102   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.758421   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758767   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.758796   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758895   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.759065   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759387   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.759548   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.759718   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.759730   47919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:57:47.879204   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:57:47.879233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.881915   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882207   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.882237   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882407   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.882582   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882737   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882880   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.883053   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.883244   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.883262   47919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:47.996920   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
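The lines above show libmachine provisioning the guest over SSH: waiting for a trivial "exit 0" to succeed, setting the hostname, and patching /etc/hosts. As a rough illustration only (not minikube's actual code), the same kind of one-off provisioning command can be run with golang.org/x/crypto/ssh; the host, user and key path are copied from the log, everything else here is an assumption.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user and address are taken from the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.83.214:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Same hostname-provisioning command the log records being sent.
		out, _ := sess.CombinedOutput(`sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname`)
		fmt.Println(string(out))
	}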
	I0229 18:57:47.996948   47919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:47.996964   47919 buildroot.go:174] setting up certificates
	I0229 18:57:47.996972   47919 provision.go:83] configureAuth start
	I0229 18:57:47.996980   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.997262   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.999702   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000044   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.000076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000207   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.002169   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002457   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.002479   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002552   47919 provision.go:138] copyHostCerts
	I0229 18:57:48.002600   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:48.002623   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:48.002690   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:48.002805   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:48.002820   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:48.002854   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:48.002936   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:48.002946   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:48.002965   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:48.003030   47919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:57:48.095543   47919 provision.go:172] copyRemoteCerts
	I0229 18:57:48.095594   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:48.095617   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.098167   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098411   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.098439   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098593   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.098770   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.098910   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.099046   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.178774   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:48.204896   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:57:48.234660   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:57:48.264189   47919 provision.go:86] duration metric: configureAuth took 267.20486ms
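For reference, the configureAuth step above regenerates a server certificate whose SANs include the machine IP, localhost and the profile name, signed by the CA under .minikube/certs, then copies the PEM files to /etc/docker on the guest. A loose sketch of issuing such a certificate with crypto/x509 follows; the helper name, key size and validity period are illustrative and not minikube's implementation.

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a host certificate with an existing CA, using the
	// SANs that appear in the log (192.168.83.214, 127.0.0.1, localhost,
	// minikube, old-k8s-version-631080).
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-631080"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-631080"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.83.214"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}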
	I0229 18:57:48.264215   47919 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:48.264391   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:57:48.264464   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.267066   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267464   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.267500   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267721   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.267913   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268105   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268260   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.268425   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.268630   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.268658   47919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:48.560376   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:48.560401   47919 machine.go:91] provisioned docker machine in 804.837627ms
	I0229 18:57:48.560414   47919 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:57:48.560426   47919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:48.560450   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.560751   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:48.560776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.563312   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563638   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.563670   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.563971   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.564126   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.564295   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.646996   47919 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:48.652329   47919 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:48.652356   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:48.652428   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:48.652538   47919 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:48.652665   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:48.663432   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:48.694980   47919 start.go:303] post-start completed in 134.554808ms
	I0229 18:57:48.695000   47919 fix.go:56] fixHost completed within 22.230801566s
	I0229 18:57:48.695033   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.697788   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698205   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.698231   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698416   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.698633   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698797   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698941   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.699118   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.699327   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.699349   47919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:48.808714   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233068.793225740
	
	I0229 18:57:48.808740   47919 fix.go:206] guest clock: 1709233068.793225740
	I0229 18:57:48.808751   47919 fix.go:219] Guest: 2024-02-29 18:57:48.79322574 +0000 UTC Remote: 2024-02-29 18:57:48.695003912 +0000 UTC m=+273.807414604 (delta=98.221828ms)
	I0229 18:57:48.808793   47919 fix.go:190] guest clock delta is within tolerance: 98.221828ms
	I0229 18:57:48.808800   47919 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 22.344627122s
	I0229 18:57:48.808832   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.809114   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:48.811872   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812297   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.812336   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812522   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813072   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813270   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813347   47919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:48.813392   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.813509   47919 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:48.813536   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.816200   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816580   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.816607   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816676   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816753   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.816939   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817097   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817244   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.817268   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.817293   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.817420   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.817538   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817643   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817769   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.919708   47919 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:48.926381   47919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:49.086263   47919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:49.093350   47919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:49.093427   47919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:49.112686   47919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:49.112716   47919 start.go:475] detecting cgroup driver to use...
	I0229 18:57:49.112784   47919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:49.135232   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:49.152937   47919 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:49.152992   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:49.172048   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:49.190450   47919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:49.341605   47919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:49.539663   47919 docker.go:233] disabling docker service ...
	I0229 18:57:49.539733   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:49.562110   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:49.578761   47919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:49.739044   47919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:49.897866   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:49.918783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:45.613998   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.114525   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.146283   47608 api_server.go:72] duration metric: took 1.032950423s to wait for apiserver process to appear ...
	I0229 18:57:46.146327   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:57:46.146344   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:46.146876   47608 api_server.go:269] stopped: https://192.168.61.34:8443/healthz: Get "https://192.168.61.34:8443/healthz": dial tcp 192.168.61.34:8443: connect: connection refused
	I0229 18:57:46.646633   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.751381   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.751410   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:49.751427   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.791602   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.791634   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:50.147094   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.153644   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.153671   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:49.941241   47919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:57:49.941328   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.953131   47919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:49.953215   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.964850   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.976035   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.988017   47919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:50.000990   47919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:50.019124   47919 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:50.019177   47919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:50.042881   47919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:50.054219   47919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:50.213793   47919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:50.387473   47919 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:50.387536   47919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:50.395113   47919 start.go:543] Will wait 60s for crictl version
	I0229 18:57:50.395177   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:50.400166   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:50.446910   47919 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:50.447015   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.486139   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.528290   47919 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
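Before this point the log shows CRI-O being prepared for the old-k8s-version profile: the pause image and cgroup manager lines in /etc/crio/crio.conf.d/02-crio.conf are rewritten with sed, crictl is pointed at crio.sock, and crio is restarted. A small, purely illustrative Go equivalent of those two substitutions (the sed commands in the log are the source of truth):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf performs the same line replacements the log shows sed
	// doing on /etc/crio/crio.conf.d/02-crio.conf.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.1", "cgroupfs"))
	}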
	I0229 18:57:50.646967   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.660388   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.660420   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:51.146674   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:51.155154   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 18:57:51.166220   47608 api_server.go:141] control plane version: v1.28.4
	I0229 18:57:51.166255   47608 api_server.go:131] duration metric: took 5.019919259s to wait for apiserver health ...
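The preceding burst from process 47608 (the embed-certs profile) is the apiserver health wait: /healthz first refuses the anonymous probe with 403, then returns 500 while the rbac and priority-class post-start hooks are still running, and finally 200 after about five seconds. A minimal sketch of such a poll loop, assuming a self-signed serving certificate and treating anything other than 200 as "not ready yet":

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy, as in the final check above
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.61.34:8443/healthz", 30*time.Second))
	}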
	I0229 18:57:51.166267   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:51.166277   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:51.168259   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:57:50.148417   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting to get IP...
	I0229 18:57:50.149211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149601   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149661   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.149584   48864 retry.go:31] will retry after 287.925969ms: waiting for machine to come up
	I0229 18:57:50.439389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440003   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440033   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.439944   48864 retry.go:31] will retry after 341.540721ms: waiting for machine to come up
	I0229 18:57:50.783988   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784622   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.784544   48864 retry.go:31] will retry after 344.053696ms: waiting for machine to come up
	I0229 18:57:51.130288   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131126   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131152   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.131075   48864 retry.go:31] will retry after 593.843769ms: waiting for machine to come up
	I0229 18:57:51.726464   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.726974   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.727000   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.726879   48864 retry.go:31] will retry after 689.199247ms: waiting for machine to come up
	I0229 18:57:52.418297   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418801   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418829   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:52.418753   48864 retry.go:31] will retry after 737.671716ms: waiting for machine to come up
	I0229 18:57:53.158161   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158573   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158618   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:53.158521   48864 retry.go:31] will retry after 1.18162067s: waiting for machine to come up
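Meanwhile process 48088 (default-k8s-diff-port) is still waiting for its freshly started domain to obtain a DHCP lease, retrying with a growing delay. A minimal sketch of that wait-for-IP pattern; lookupIP is a hypothetical placeholder for the libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP keeps calling lookupIP until it returns an address or the
	// overall timeout elapses, lengthening the pause between attempts.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 3*time.Second {
				delay += delay / 2 // grow the wait, roughly like the retries logged above
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
		fmt.Println(ip, err)
	}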
	I0229 18:57:50.530077   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:50.533389   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.533761   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:50.533794   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.534001   47919 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:50.538857   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:50.556961   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:57:50.557028   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:50.616925   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:50.617001   47919 ssh_runner.go:195] Run: which lz4
	I0229 18:57:50.622857   47919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:50.628035   47919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:50.628070   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:57:52.679575   47919 crio.go:444] Took 2.056751 seconds to copy over tarball
	I0229 18:57:52.679656   47919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
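Because no preloaded images were found in the CRI-O image store, the ~441 MB preload tarball is copied to the guest and unpacked into /var. A hedged sketch of the extraction step only, reusing the exact tar invocation from the log (the surrounding scp and cleanup are omitted):

	package preload

	import (
		"fmt"
		"os/exec"
	)

	// extract unpacks a preloaded-images tarball into /var, matching the
	// command ssh_runner executes on the guest in the log above.
	func extract(tarball string) error {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		return nil
	}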
	I0229 18:57:51.169655   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:57:51.184521   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:57:51.215791   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:57:51.235050   47608 system_pods.go:59] 8 kube-system pods found
	I0229 18:57:51.235136   47608 system_pods.go:61] "coredns-5dd5756b68-6b5pm" [d8023f3b-fc07-4dd4-98dc-bd321d137a06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:57:51.235150   47608 system_pods.go:61] "etcd-embed-certs-991128" [01a1ee82-a650-4736-8fb9-e983427bef96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:57:51.235161   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [a6810e01-a958-4e7b-ba0f-6cd2e747b998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:57:51.235170   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [6469e9c8-7372-4756-926d-0de600c8ed4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:57:51.235179   47608 system_pods.go:61] "kube-proxy-zd7rf" [963b5fb6-f287-4c80-a324-b0cb09b1ae97] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:57:51.235190   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [ac2e7c83-6e96-46ba-aeed-c847d312ba4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:57:51.235199   47608 system_pods.go:61] "metrics-server-57f55c9bc5-5w6c9" [6ddb9b39-e1d1-4d34-bb45-e9a5c161f13d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:57:51.235220   47608 system_pods.go:61] "storage-provisioner" [99d0cbe5-bb8b-472b-be91-9f29442c8c1d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:57:51.235243   47608 system_pods.go:74] duration metric: took 19.430245ms to wait for pod list to return data ...
	I0229 18:57:51.235257   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:57:51.241823   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:57:51.241849   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 18:57:51.241863   47608 node_conditions.go:105] duration metric: took 6.600606ms to run NodePressure ...
	I0229 18:57:51.241884   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:51.654038   47608 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663120   47608 kubeadm.go:787] kubelet initialised
	I0229 18:57:51.663146   47608 kubeadm.go:788] duration metric: took 9.079921ms waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663156   47608 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:57:51.671417   47608 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:57:53.679921   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:54.342488   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.342981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.343006   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:54.342931   48864 retry.go:31] will retry after 1.180730966s: waiting for machine to come up
	I0229 18:57:55.524954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525398   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525427   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:55.525338   48864 retry.go:31] will retry after 1.706902899s: waiting for machine to come up
	I0229 18:57:57.233340   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233834   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233862   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:57.233791   48864 retry.go:31] will retry after 2.281506267s: waiting for machine to come up
	I0229 18:57:55.661321   47919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.981628592s)
	I0229 18:57:55.661351   47919 crio.go:451] Took 2.981744 seconds to extract the tarball
	I0229 18:57:55.661363   47919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:55.708924   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:55.751627   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:55.751650   47919 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:57:55.751726   47919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.751752   47919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.751758   47919 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:57:55.751735   47919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.751751   47919 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.751772   47919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.751864   47919 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.752153   47919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753139   47919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.753456   47919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.753467   47919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:57:55.753486   47919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.753547   47919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.934620   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988723   47919 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:57:55.988767   47919 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988811   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:55.993750   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:56.036192   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.037872   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.038123   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:57:56.040846   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:57:56.046242   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.065126   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.077683   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.126642   47919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:57:56.126683   47919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.126741   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.191928   47919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:57:56.191980   47919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.192006   47919 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:57:56.192037   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.192045   47919 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:57:56.192086   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.203773   47919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:57:56.203819   47919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.203863   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227761   47919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:57:56.227799   47919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.227832   47919 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:57:56.227856   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227864   47919 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.227876   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.227922   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227925   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:57:56.227956   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.227961   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.246645   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.344012   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:57:56.344125   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:57:56.346352   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:57:56.361309   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.361484   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:57:56.383942   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:57:56.411697   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:57:56.649625   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:56.801430   47919 cache_images.go:92] LoadImages completed in 1.049765957s
	W0229 18:57:56.801578   47919 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0229 18:57:56.801670   47919 ssh_runner.go:195] Run: crio config
	I0229 18:57:56.872210   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:57:56.872238   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:56.872260   47919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:56.872283   47919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:57:56.872458   47919 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:56.872545   47919 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:56.872620   47919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:57:56.884571   47919 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:56.884647   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:56.896167   47919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:57:56.916824   47919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:56.938739   47919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:57:56.961411   47919 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:56.966134   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:56.981089   47919 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:57:56.981121   47919 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:56.981295   47919 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:56.981358   47919 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:56.981465   47919 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:57:56.981533   47919 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:57:56.981586   47919 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:57:56.981755   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:56.981791   47919 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:56.981806   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:56.981845   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:56.981878   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:56.981910   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:56.981961   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:56.982889   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:57.015587   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:57:57.048698   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:57.078634   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:57:57.114008   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:57.146884   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:57.179560   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:57.211989   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:57.246936   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:57.280651   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:57.310050   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:57.337439   47919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:57.359100   47919 ssh_runner.go:195] Run: openssl version
	I0229 18:57:57.366111   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:57.380593   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386703   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386771   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.401429   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:57.416516   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:57.429645   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.434960   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.435010   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.441855   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:57.457277   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:57.471345   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476556   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476629   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.483318   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:57.496355   47919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:57.501976   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:57.509611   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:57.516861   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:57.523819   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:57.530959   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:57.539788   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:57.548575   47919 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:57.548663   47919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:57.548731   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:57.596234   47919 cri.go:89] found id: ""
	I0229 18:57:57.596327   47919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:57.612827   47919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:57.612856   47919 kubeadm.go:636] restartCluster start
	I0229 18:57:57.612903   47919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:57.627565   47919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:57.629049   47919 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:57:57.630139   47919 kubeconfig.go:146] "old-k8s-version-631080" context is missing from /home/jenkins/minikube-integration/18259-6428/kubeconfig - will repair!
	I0229 18:57:57.631735   47919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:57.634318   47919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:57.648383   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:57.648458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:57.663708   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.149010   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.149086   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.164430   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.649075   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.649186   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.663768   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.149370   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.149450   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.165089   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.648609   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.648690   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.665224   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:56.182137   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:58.681550   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:59.517428   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518040   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518069   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:59.517984   48864 retry.go:31] will retry after 2.738727804s: waiting for machine to come up
	I0229 18:58:02.258042   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258540   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258569   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:02.258498   48864 retry.go:31] will retry after 2.520892118s: waiting for machine to come up
	I0229 18:58:00.148880   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.148969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.168561   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.649227   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.649308   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.668162   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.148539   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.148600   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.168347   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.649392   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.649484   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.663974   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.149462   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.149548   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.164757   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.649398   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.649522   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.664014   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.148502   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.148718   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.165374   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.648528   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.648594   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.663305   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.148760   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.148847   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.163480   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.649122   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.649226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.663556   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.179941   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:03.679523   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:04.179171   47608 pod_ready.go:92] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.179198   47608 pod_ready.go:81] duration metric: took 12.507755709s waiting for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.179212   47608 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184638   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.184657   47608 pod_ready.go:81] duration metric: took 5.438559ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184665   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189119   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.189139   47608 pod_ready.go:81] duration metric: took 4.467998ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189147   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193800   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.193819   47608 pod_ready.go:81] duration metric: took 4.66771ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193827   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198220   47608 pod_ready.go:92] pod "kube-proxy-zd7rf" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.198239   47608 pod_ready.go:81] duration metric: took 4.405824ms waiting for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198246   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575846   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.575869   47608 pod_ready.go:81] duration metric: took 377.617228ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575878   47608 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.780871   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781307   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:04.781266   48864 retry.go:31] will retry after 3.73331916s: waiting for machine to come up
	I0229 18:58:08.519173   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.519646   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Found IP for machine: 192.168.39.210
	I0229 18:58:08.519666   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserving static IP address...
	I0229 18:58:08.519687   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has current primary IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.520011   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.520032   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserved static IP address: 192.168.39.210
	I0229 18:58:08.520046   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | skip adding static IP to network mk-default-k8s-diff-port-153528 - found existing host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"}
	I0229 18:58:08.520057   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Getting to WaitForSSH function...
	I0229 18:58:08.520067   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for SSH to be available...
	I0229 18:58:08.522047   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522377   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.522411   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522529   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH client type: external
	I0229 18:58:08.522555   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa (-rw-------)
	I0229 18:58:08.522592   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:08.522606   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | About to run SSH command:
	I0229 18:58:08.522616   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | exit 0
	I0229 18:58:08.651113   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:08.651447   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetConfigRaw
	I0229 18:58:08.652078   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.654739   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655191   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.655222   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655516   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:58:08.655758   48088 machine.go:88] provisioning docker machine ...
	I0229 18:58:08.655787   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:08.655976   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656109   48088 buildroot.go:166] provisioning hostname "default-k8s-diff-port-153528"
	I0229 18:58:08.656127   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.658580   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.658933   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.658961   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.659066   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.659255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659419   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659547   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.659714   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.659933   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.659952   48088 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153528 && echo "default-k8s-diff-port-153528" | sudo tee /etc/hostname
	I0229 18:58:08.782704   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153528
	
	I0229 18:58:08.782727   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.785588   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.785939   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.785967   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.786107   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.786290   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786430   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786550   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.786675   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.786910   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.786937   48088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153528/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:08.906593   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:08.906630   48088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:08.906671   48088 buildroot.go:174] setting up certificates
	I0229 18:58:08.906683   48088 provision.go:83] configureAuth start
	I0229 18:58:08.906700   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.906992   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.909897   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.910299   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910420   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.912899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913333   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.913363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913526   48088 provision.go:138] copyHostCerts
	I0229 18:58:08.913589   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:08.913602   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:08.913671   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:08.913796   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:08.913808   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:08.913838   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:08.913920   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:08.913940   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:08.913969   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:08.914052   48088 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153528 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube default-k8s-diff-port-153528]
	I0229 18:58:09.033009   48088 provision.go:172] copyRemoteCerts
	I0229 18:58:09.033064   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:09.033087   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.035647   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036023   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.036061   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036262   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.036434   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.036582   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.036685   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.127168   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:09.162113   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 18:58:09.191657   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:09.224555   48088 provision.go:86] duration metric: configureAuth took 317.8564ms
	I0229 18:58:09.224589   48088 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:09.224789   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:58:09.224877   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.227193   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227549   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.227577   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227731   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.227950   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228111   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.228398   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.228595   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.228617   48088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:09.760261   47515 start.go:369] acquired machines lock for "no-preload-247197" in 59.368392801s
	I0229 18:58:09.760316   47515 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:58:09.760326   47515 fix.go:54] fixHost starting: 
	I0229 18:58:09.760731   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:58:09.760768   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:58:09.777304   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0229 18:58:09.777781   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:58:09.778277   47515 main.go:141] libmachine: Using API Version  1
	I0229 18:58:09.778301   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:58:09.778655   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:58:09.778829   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:09.779012   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 18:58:09.780644   47515 fix.go:102] recreateIfNeeded on no-preload-247197: state=Stopped err=<nil>
	I0229 18:58:09.780670   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	W0229 18:58:09.780844   47515 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:58:09.782653   47515 out.go:177] * Restarting existing kvm2 VM for "no-preload-247197" ...
	I0229 18:58:05.149421   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.149514   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.164236   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.648767   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.648856   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.664890   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.148979   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.149069   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.165186   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.649135   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.649245   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.665357   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.148896   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.148978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.163358   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.649238   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.649309   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.665329   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.665359   47919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:07.665368   47919 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:07.665378   47919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:07.665433   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:07.713980   47919 cri.go:89] found id: ""
	I0229 18:58:07.714045   47919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:07.740723   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:07.753838   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:07.753914   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767175   47919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767197   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:07.902881   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.741237   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.970287   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.099101   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.214816   47919 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:09.214897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:09.715311   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:06.583750   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.083063   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.517694   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:09.517720   48088 machine.go:91] provisioned docker machine in 861.950931ms
	I0229 18:58:09.517732   48088 start.go:300] post-start starting for "default-k8s-diff-port-153528" (driver="kvm2")
	I0229 18:58:09.517742   48088 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:09.517755   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.518097   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:09.518134   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.520915   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.521285   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.521590   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.521761   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.521911   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.606485   48088 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:09.611376   48088 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:09.611404   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:09.611468   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:09.611564   48088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:09.611679   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:09.621573   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:09.648803   48088 start.go:303] post-start completed in 131.058856ms
	I0229 18:58:09.648825   48088 fix.go:56] fixHost completed within 20.839852585s
	I0229 18:58:09.648848   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.651416   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651743   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.651771   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651917   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.652114   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652392   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.652563   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.652715   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.652728   48088 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:09.760132   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233089.743154671
	
	I0229 18:58:09.760154   48088 fix.go:206] guest clock: 1709233089.743154671
	I0229 18:58:09.760160   48088 fix.go:219] Guest: 2024-02-29 18:58:09.743154671 +0000 UTC Remote: 2024-02-29 18:58:09.648829212 +0000 UTC m=+270.421886207 (delta=94.325459ms)
	I0229 18:58:09.760177   48088 fix.go:190] guest clock delta is within tolerance: 94.325459ms
	I0229 18:58:09.760183   48088 start.go:83] releasing machines lock for "default-k8s-diff-port-153528", held for 20.951247697s
	I0229 18:58:09.760211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.760473   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:09.763342   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763701   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.763746   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763896   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764519   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764720   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764801   48088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:09.764849   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.764951   48088 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:09.764981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.767670   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.767861   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768035   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768054   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768204   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768322   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768347   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768504   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.768518   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768673   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.768694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768890   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.769024   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.849055   48088 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:09.872309   48088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:10.015348   48088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:10.023333   48088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:10.023405   48088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:10.042264   48088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:10.042288   48088 start.go:475] detecting cgroup driver to use...
	I0229 18:58:10.042361   48088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:10.062390   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:10.080651   48088 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:10.080714   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:10.098478   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:10.115610   48088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:10.250212   48088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:10.402800   48088 docker.go:233] disabling docker service ...
	I0229 18:58:10.402862   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:10.419793   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:10.435149   48088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:10.589671   48088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:10.714460   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:10.730820   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:10.753910   48088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:10.753977   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.766151   48088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:10.766232   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.778824   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.792936   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.810158   48088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:10.828150   48088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:10.843416   48088 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:10.843488   48088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:10.866488   48088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:10.880628   48088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:11.031221   48088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:11.199068   48088 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:11.199143   48088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:11.204851   48088 start.go:543] Will wait 60s for crictl version
	I0229 18:58:11.204922   48088 ssh_runner.go:195] Run: which crictl
	I0229 18:58:11.209384   48088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:11.256928   48088 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:11.257014   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.293338   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.329107   48088 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:58:09.783970   47515 main.go:141] libmachine: (no-preload-247197) Calling .Start
	I0229 18:58:09.784127   47515 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:58:09.784926   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:58:09.785291   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:58:09.785654   47515 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:58:09.786319   47515 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:58:11.102135   47515 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:58:11.102911   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.103333   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.103414   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.103321   49001 retry.go:31] will retry after 205.990392ms: waiting for machine to come up
	I0229 18:58:11.310742   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.311298   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.311327   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.311247   49001 retry.go:31] will retry after 353.756736ms: waiting for machine to come up
	I0229 18:58:11.666882   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.667361   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.667392   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.667319   49001 retry.go:31] will retry after 308.284801ms: waiting for machine to come up
	I0229 18:58:11.976805   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.977355   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.977385   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.977309   49001 retry.go:31] will retry after 481.108836ms: waiting for machine to come up
	I0229 18:58:12.459764   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:12.460292   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:12.460330   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:12.460253   49001 retry.go:31] will retry after 549.22451ms: waiting for machine to come up
	I0229 18:58:11.330594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:11.333628   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334080   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:11.334112   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334361   48088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:11.339127   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:11.353078   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:58:11.353129   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:11.392503   48088 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:58:11.392573   48088 ssh_runner.go:195] Run: which lz4
	I0229 18:58:11.398589   48088 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:58:11.405052   48088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:58:11.405091   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:58:13.428402   48088 crio.go:444] Took 2.029836 seconds to copy over tarball
	I0229 18:58:13.428481   48088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:58:10.215640   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.715115   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.215866   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.715307   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.215171   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.715206   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.215153   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.715048   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.215148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.715628   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.084645   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.087354   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.011239   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.011724   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.011751   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.011676   49001 retry.go:31] will retry after 662.346902ms: waiting for machine to come up
	I0229 18:58:13.675622   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.676179   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.676208   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.676115   49001 retry.go:31] will retry after 761.484123ms: waiting for machine to come up
	I0229 18:58:14.439091   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:14.439599   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:14.439626   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:14.439546   49001 retry.go:31] will retry after 980.352556ms: waiting for machine to come up
	I0229 18:58:15.421962   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:15.422377   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:15.422405   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:15.422314   49001 retry.go:31] will retry after 1.134741057s: waiting for machine to come up
	I0229 18:58:16.558585   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:16.559071   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:16.559097   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:16.559005   49001 retry.go:31] will retry after 2.299052603s: waiting for machine to come up
	I0229 18:58:16.327243   48088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898733984s)
	I0229 18:58:16.327277   48088 crio.go:451] Took 2.898846 seconds to extract the tarball
	I0229 18:58:16.327289   48088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:58:16.374029   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:16.425625   48088 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:58:16.425654   48088 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:58:16.425740   48088 ssh_runner.go:195] Run: crio config
	I0229 18:58:16.477353   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:16.477382   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:16.477406   48088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:16.477447   48088 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153528 NodeName:default-k8s-diff-port-153528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:16.477595   48088 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-153528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:16.477659   48088 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-153528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 18:58:16.477718   48088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:58:16.489240   48088 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:16.489301   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:16.500764   48088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:58:16.522927   48088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:58:16.543902   48088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 18:58:16.565262   48088 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:16.571163   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:16.585476   48088 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528 for IP: 192.168.39.210
	I0229 18:58:16.585507   48088 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:16.585657   48088 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:16.585704   48088 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:16.585772   48088 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.key
	I0229 18:58:16.647093   48088 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key.6213553a
	I0229 18:58:16.647194   48088 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key
	I0229 18:58:16.647398   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:16.647463   48088 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:16.647476   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:16.647501   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:16.647527   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:16.647553   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:16.647591   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:16.648235   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:16.678452   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:58:16.708360   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:16.740905   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:16.768820   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:16.799459   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:16.829488   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:16.860881   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:16.893064   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:16.923404   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:16.952531   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:16.980895   48088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:17.001306   48088 ssh_runner.go:195] Run: openssl version
	I0229 18:58:17.007995   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:17.024000   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030471   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030544   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.038306   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:17.050985   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:17.063089   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068437   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068485   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.075156   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:17.087015   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:17.099964   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105272   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105333   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.112447   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:17.126499   48088 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:17.133216   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:17.140320   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:17.147900   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:17.154931   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:17.163552   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:17.172256   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:58:17.181349   48088 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:17.181481   48088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:17.181554   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:17.227444   48088 cri.go:89] found id: ""
	I0229 18:58:17.227532   48088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:17.242533   48088 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:17.242562   48088 kubeadm.go:636] restartCluster start
	I0229 18:58:17.242616   48088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:17.254713   48088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.256305   48088 kubeconfig.go:92] found "default-k8s-diff-port-153528" server: "https://192.168.39.210:8444"
	I0229 18:58:17.259432   48088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:17.281454   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.281525   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.295342   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.781719   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.781807   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.797462   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.281981   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.282082   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.300449   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.781952   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.782024   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.796641   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:15.215935   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.714969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.215921   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.715200   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.215151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.715520   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.215291   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.215157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.715037   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.585143   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.086077   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.861140   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:18.861635   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:18.861658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:18.861584   49001 retry.go:31] will retry after 2.115098542s: waiting for machine to come up
	I0229 18:58:20.978165   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:20.978625   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:20.978658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:20.978570   49001 retry.go:31] will retry after 3.520116791s: waiting for machine to come up
	I0229 18:58:19.282008   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.282093   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.297806   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:19.782384   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.782465   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.802496   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.281712   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.281777   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.298545   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.782139   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.782249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.799615   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.282180   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.282288   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.297649   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.782263   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.782341   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.797537   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.282131   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.282211   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.303084   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.781558   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.781645   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.797155   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.281645   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.281727   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.296059   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.781663   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.797132   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.215501   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.214953   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.715762   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.215608   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.715556   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.215633   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.715012   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.215182   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.585475   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:22.586962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:25.082804   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:24.503134   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:24.503537   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:24.503561   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:24.503495   49001 retry.go:31] will retry after 3.056941725s: waiting for machine to come up
	I0229 18:58:27.562228   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:27.562698   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:27.562729   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:27.562650   49001 retry.go:31] will retry after 5.535128197s: waiting for machine to come up
	I0229 18:58:24.282207   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.282273   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.298683   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:24.781997   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.782088   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.796544   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.282137   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.282249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.297916   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.782489   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.782605   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.800171   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.281679   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.281764   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.296395   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.781700   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.796380   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.282230   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:27.282319   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:27.300719   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.300745   48088 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:27.300753   48088 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:27.300762   48088 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:27.300822   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:27.344465   48088 cri.go:89] found id: ""
	I0229 18:58:27.344525   48088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:27.367244   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:27.379831   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:27.379895   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390372   48088 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390393   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:27.521441   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.070547   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.324425   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.416807   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.485785   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:28.485880   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.986473   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.215272   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.715667   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.215566   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.715860   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.214993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.715679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.215093   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.715081   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.215188   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.715981   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.585150   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.585716   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.486136   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.512004   48088 api_server.go:72] duration metric: took 1.026225672s to wait for apiserver process to appear ...
	I0229 18:58:29.512036   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:58:29.512081   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:29.512602   48088 api_server.go:269] stopped: https://192.168.39.210:8444/healthz: Get "https://192.168.39.210:8444/healthz": dial tcp 192.168.39.210:8444: connect: connection refused
	I0229 18:58:30.012197   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.076090   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:58:33.076122   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:58:33.076141   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.115044   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.115082   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:33.512305   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.518615   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.518640   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.012514   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.024771   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:34.024809   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.512427   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.519703   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 18:58:34.527814   48088 api_server.go:141] control plane version: v1.28.4
	I0229 18:58:34.527850   48088 api_server.go:131] duration metric: took 5.015799681s to wait for apiserver health ...
	I0229 18:58:34.527862   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:34.527869   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:34.529573   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:58:30.215544   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.715080   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.215386   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.215078   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.715087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.215842   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.714950   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.215778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.715201   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.084243   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:34.087247   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:33.099983   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100527   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has current primary IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100548   47515 main.go:141] libmachine: (no-preload-247197) Found IP for machine: 192.168.50.72
	I0229 18:58:33.100584   47515 main.go:141] libmachine: (no-preload-247197) Reserving static IP address...
	I0229 18:58:33.100959   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.100985   47515 main.go:141] libmachine: (no-preload-247197) DBG | skip adding static IP to network mk-no-preload-247197 - found existing host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"}
	I0229 18:58:33.100999   47515 main.go:141] libmachine: (no-preload-247197) Reserved static IP address: 192.168.50.72
	I0229 18:58:33.101016   47515 main.go:141] libmachine: (no-preload-247197) Waiting for SSH to be available...
	I0229 18:58:33.101057   47515 main.go:141] libmachine: (no-preload-247197) DBG | Getting to WaitForSSH function...
	I0229 18:58:33.103439   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.103766   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.103817   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.104041   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH client type: external
	I0229 18:58:33.104069   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa (-rw-------)
	I0229 18:58:33.104110   47515 main.go:141] libmachine: (no-preload-247197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:33.104131   47515 main.go:141] libmachine: (no-preload-247197) DBG | About to run SSH command:
	I0229 18:58:33.104145   47515 main.go:141] libmachine: (no-preload-247197) DBG | exit 0
	I0229 18:58:33.240401   47515 main.go:141] libmachine: (no-preload-247197) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:33.240811   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:58:33.241500   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.244578   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.244970   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.245002   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.245358   47515 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:58:33.245522   47515 machine.go:88] provisioning docker machine ...
	I0229 18:58:33.245542   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:33.245755   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.245935   47515 buildroot.go:166] provisioning hostname "no-preload-247197"
	I0229 18:58:33.245977   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.246175   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.248841   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249263   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.249284   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249458   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.249629   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249767   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249946   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.250120   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.250335   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.250351   47515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247197 && echo "no-preload-247197" | sudo tee /etc/hostname
	I0229 18:58:33.386175   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247197
	
	I0229 18:58:33.386210   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.389491   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.389909   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.389950   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.390080   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.390288   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390495   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390678   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.390844   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.391058   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.391090   47515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247197/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:33.510209   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:33.510243   47515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:33.510263   47515 buildroot.go:174] setting up certificates
	I0229 18:58:33.510273   47515 provision.go:83] configureAuth start
	I0229 18:58:33.510281   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.510582   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.513357   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513741   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.513769   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.516227   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516513   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.516543   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516700   47515 provision.go:138] copyHostCerts
	I0229 18:58:33.516746   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:33.516761   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:33.516824   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:33.516931   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:33.516952   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:33.516987   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:33.517066   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:33.517077   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:33.517106   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:33.517181   47515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.no-preload-247197 san=[192.168.50.72 192.168.50.72 localhost 127.0.0.1 minikube no-preload-247197]
	I0229 18:58:33.651858   47515 provision.go:172] copyRemoteCerts
	I0229 18:58:33.651914   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:33.651936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.655072   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655551   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.655584   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655776   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.655952   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.656103   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.656277   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:33.747197   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:58:33.776690   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:33.804404   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:33.831068   47515 provision.go:86] duration metric: configureAuth took 320.782451ms
	I0229 18:58:33.831105   47515 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:33.831336   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:33.831469   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.834209   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834617   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.834650   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834845   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.835046   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835215   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835343   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.835503   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.835694   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.835717   47515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:34.141350   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:34.141372   47515 machine.go:91] provisioned docker machine in 895.837431ms
	I0229 18:58:34.141385   47515 start.go:300] post-start starting for "no-preload-247197" (driver="kvm2")
	I0229 18:58:34.141399   47515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:34.141422   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.141763   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:34.141800   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.144673   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145078   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.145106   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145225   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.145387   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.145509   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.145618   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.241817   47515 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:34.247096   47515 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:34.247125   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:34.247200   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:34.247294   47515 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:34.247386   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:34.261959   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:34.293974   47515 start.go:303] post-start completed in 152.574202ms
	I0229 18:58:34.294000   47515 fix.go:56] fixHost completed within 24.533673806s
	I0229 18:58:34.294031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.297066   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297455   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.297480   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297685   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.297865   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298064   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298256   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.298448   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:34.298671   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:34.298687   47515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:34.416701   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233114.391433365
	
	I0229 18:58:34.416724   47515 fix.go:206] guest clock: 1709233114.391433365
	I0229 18:58:34.416733   47515 fix.go:219] Guest: 2024-02-29 18:58:34.391433365 +0000 UTC Remote: 2024-02-29 18:58:34.294005249 +0000 UTC m=+366.458860154 (delta=97.428116ms)
	I0229 18:58:34.416763   47515 fix.go:190] guest clock delta is within tolerance: 97.428116ms
	I0229 18:58:34.416770   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 24.656479144s
	I0229 18:58:34.416795   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.417031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:34.419713   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420098   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.420129   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420288   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420789   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420989   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.421076   47515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:34.421125   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.421239   47515 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:34.421268   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.424047   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424359   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424399   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424418   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424564   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.424731   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.424803   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424829   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424969   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425124   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.425217   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.425348   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.425506   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425705   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.505253   47515 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:34.533780   47515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:34.696609   47515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:34.703768   47515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:34.703848   47515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:34.723243   47515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:34.723271   47515 start.go:475] detecting cgroup driver to use...
	I0229 18:58:34.723342   47515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:34.743696   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:34.760022   47515 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:34.760085   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:34.775217   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:34.791576   47515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:34.920544   47515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:35.093684   47515 docker.go:233] disabling docker service ...
	I0229 18:58:35.093760   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:35.112349   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:35.128145   47515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:35.246120   47515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:35.363110   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:35.378087   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:35.399610   47515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:35.399658   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.410579   47515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:35.410624   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.421664   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.432726   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.443728   47515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:35.455072   47515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:35.467211   47515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:35.467263   47515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:35.480669   47515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:35.491649   47515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:35.621272   47515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:35.793148   47515 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:35.793225   47515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:35.798495   47515 start.go:543] Will wait 60s for crictl version
	I0229 18:58:35.798556   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:35.803756   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:35.848168   47515 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:35.848259   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.879346   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.911939   47515 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 18:58:35.913174   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:35.915761   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916134   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:35.916162   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916350   47515 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:35.921206   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:35.936342   47515 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:58:35.936375   47515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:35.974456   47515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 18:58:35.974475   47515 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:58:35.974509   47515 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.974546   47515 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.974567   47515 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:35.974613   47515 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.974668   47515 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.974733   47515 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.974780   47515 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975073   47515 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975958   47515 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.975981   47515 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975993   47515 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.976002   47515 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.976027   47515 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975963   47515 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.975959   47515 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.976249   47515 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.111205   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 18:58:36.124071   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.150002   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.196158   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.258361   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.273898   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.283390   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.336487   47515 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 18:58:36.336531   47515 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.336541   47515 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 18:58:36.336577   47515 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.336590   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336620   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336636   47515 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 18:58:36.336661   47515 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.336670   47515 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 18:58:36.336695   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336697   47515 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.336723   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.383302   47515 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 18:58:36.383347   47515 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.383402   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393420   47515 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 18:58:36.393444   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.393459   47515 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.393495   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393527   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.393579   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.393612   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.393665   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.503611   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.503707   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.508306   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.508403   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.511536   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:58:36.511600   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511636   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:36.511706   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.511721   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.511749   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511781   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.522392   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 18:58:36.522413   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522458   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522645   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 18:58:36.523319   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 18:58:36.529871   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576922   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576994   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:58:36.577093   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:36.892014   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
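(Illustrative sketch only.) The 47515 log entries above show the image-cache flow when no preload tarball matches the requested Kubernetes version: each image is inspected in the node's container storage with "podman image inspect --format {{.Id}}", a mismatched or missing copy is removed with "crictl rmi", and the image is then re-loaded from a cached tarball with "podman load -i". The standalone Go sketch below mirrors that inspect/remove/load pattern locally with os/exec; the helper name ensureImage, the placeholder image ID, and running the commands locally instead of over SSH (as minikube's ssh_runner does) are assumptions for the sketch, not minikube code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureImage loads a cached image tarball if the image is missing from the
    // local container storage, or is present under a different image ID.
    func ensureImage(image, wantID, tarball string) error {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // already present with the expected ID
    	}
    	// Remove any stale copy; ignore "not found" errors.
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
    	// Load the image from the pre-downloaded tarball.
    	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
    		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
    	}
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.9",
    		"<expected-image-id>", // placeholder, not a real digest
    		"/var/lib/minikube/images/pause_3.9")
    	fmt.Println(err)
    }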
	I0229 18:58:34.530886   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:58:34.547233   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:58:34.572237   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:58:34.586775   48088 system_pods.go:59] 8 kube-system pods found
	I0229 18:58:34.586816   48088 system_pods.go:61] "coredns-5dd5756b68-tr4nn" [016aff45-17c3-4278-a7f3-1e0a5770f1d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:58:34.586827   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [829f38ad-e4e4-434d-8da6-dde64deeb1ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:58:34.586837   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [e27986e6-58a2-4acc-8a41-d4662ce0848d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:58:34.586853   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [fb77dff9-141e-495f-9be8-f570f9387bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:58:34.586868   48088 system_pods.go:61] "kube-proxy-fwqsv" [af8cd0ff-71dd-44d4-8918-496e27654cbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:58:34.586887   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [a325ec8e-4613-4447-87b1-c23b5b614352] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:58:34.586898   48088 system_pods.go:61] "metrics-server-57f55c9bc5-226bj" [80d7a4c6-e9b5-4324-8c61-489a874a9e79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:58:34.586910   48088 system_pods.go:61] "storage-provisioner" [4270d9ce-329f-4531-9563-65a398f8b168] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:58:34.586919   48088 system_pods.go:74] duration metric: took 14.657543ms to wait for pod list to return data ...
	I0229 18:58:34.586932   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:58:34.595109   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:58:34.595144   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 18:58:34.595158   48088 node_conditions.go:105] duration metric: took 8.219984ms to run NodePressure ...
	I0229 18:58:34.595179   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:34.946493   48088 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951066   48088 kubeadm.go:787] kubelet initialised
	I0229 18:58:34.951088   48088 kubeadm.go:788] duration metric: took 4.569338ms waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951098   48088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:58:34.956637   48088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:36.967075   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:35.215815   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.715203   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.215521   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.715525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.215610   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.715474   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.215208   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.714993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.215128   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.584041   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.584897   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.722817   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.20033311s)
	I0229 18:58:38.722904   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 18:58:38.722923   47515 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.830873001s)
	I0229 18:58:38.722981   47515 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 18:58:38.723016   47515 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:38.722938   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.723083   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:38.723104   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.722872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.145756086s)
	I0229 18:58:38.723163   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 18:58:38.728297   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:42.131683   47515 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.403360461s)
	I0229 18:58:42.131729   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 18:58:42.131819   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.408694108s)
	I0229 18:58:42.131839   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 18:58:42.131823   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:42.131862   47515 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:42.131903   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:39.465588   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:41.473698   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:42.965252   48088 pod_ready.go:92] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.965281   48088 pod_ready.go:81] duration metric: took 8.008622438s waiting for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.965293   48088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977865   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.977888   48088 pod_ready.go:81] duration metric: took 12.586144ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977900   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486518   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:43.486545   48088 pod_ready.go:81] duration metric: took 508.631346ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486554   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:40.215679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.715898   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.215271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.715702   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.214943   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.715085   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.215196   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.715164   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.215580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.715155   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.082209   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.089104   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:45.101973   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.991872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.859995098s)
	I0229 18:58:43.991921   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 18:58:43.992104   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.860178579s)
	I0229 18:58:43.992159   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 18:58:43.992190   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:43.992238   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:45.454368   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.462102352s)
	I0229 18:58:45.454407   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 18:58:45.454436   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.454567   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.493014   48088 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.493937   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.493969   48088 pod_ready.go:81] duration metric: took 3.007406763s waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.493982   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499157   48088 pod_ready.go:92] pod "kube-proxy-fwqsv" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.499177   48088 pod_ready.go:81] duration metric: took 5.187224ms waiting for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499188   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006573   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:48.006600   48088 pod_ready.go:81] duration metric: took 1.507402889s waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006612   48088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:45.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.715879   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.215457   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.715056   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.215140   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.715448   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.715058   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.586794   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.084118   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:48.118942   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.664337971s)
	I0229 18:58:48.118983   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 18:58:48.119010   47515 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:48.119086   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:52.117429   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.998319742s)
	I0229 18:58:52.117462   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 18:58:52.117488   47515 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:52.117538   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:50.015404   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:52.515203   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.214969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.715535   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.715704   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.715897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.215106   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.715753   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.215737   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.715449   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.084871   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:54.582435   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.079184   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 18:58:53.079224   47515 cache_images.go:123] Successfully loaded all cached images
	I0229 18:58:53.079231   47515 cache_images.go:92] LoadImages completed in 17.104746432s
	I0229 18:58:53.079303   47515 ssh_runner.go:195] Run: crio config
	I0229 18:58:53.126378   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:58:53.126400   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:53.126417   47515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:53.126434   47515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247197 NodeName:no-preload-247197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:53.126583   47515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247197"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:53.126643   47515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:58:53.126692   47515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:58:53.141044   47515 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:53.141117   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:53.153167   47515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 18:58:53.173724   47515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:58:53.192645   47515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0229 18:58:53.212004   47515 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:53.216443   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:53.233319   47515 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197 for IP: 192.168.50.72
	I0229 18:58:53.233353   47515 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:53.233510   47515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:53.233568   47515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:53.233680   47515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.key
	I0229 18:58:53.233763   47515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key.7c8fc674
	I0229 18:58:53.233803   47515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key
	I0229 18:58:53.233915   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:53.233942   47515 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:53.233948   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:53.233971   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:53.233991   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:53.234011   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:53.234050   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:53.234710   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:53.264093   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:58:53.290793   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:53.319206   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:53.346074   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:53.373754   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:53.402222   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:53.430685   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:53.458589   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:53.485553   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:53.513594   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:53.542588   47515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:53.562935   47515 ssh_runner.go:195] Run: openssl version
	I0229 18:58:53.571313   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:53.586708   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592585   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592682   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.600135   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:53.614410   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:53.627733   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632869   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632926   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.639973   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:53.654090   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:53.667714   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.672987   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.673046   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.679806   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:53.692846   47515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:53.697764   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:53.704678   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:53.711070   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:53.717607   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:53.724048   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:53.731138   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:58:53.737875   47515 kubeadm.go:404] StartCluster: {Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:53.737981   47515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:53.738028   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:53.777952   47515 cri.go:89] found id: ""
	I0229 18:58:53.778016   47515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:53.790323   47515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:53.790342   47515 kubeadm.go:636] restartCluster start
	I0229 18:58:53.790397   47515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:53.801812   47515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:53.803203   47515 kubeconfig.go:92] found "no-preload-247197" server: "https://192.168.50.72:8443"
	I0229 18:58:53.806252   47515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:53.817542   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:53.817601   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:53.831702   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.318196   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.318261   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.332586   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.818521   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.818617   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.835279   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.317681   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.317760   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.334156   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.818654   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.818761   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.834435   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.317800   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.317923   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.333149   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.817667   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.817776   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.832497   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.318058   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.318173   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.332672   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.818372   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.818477   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.834669   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.015453   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:57.513580   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:55.215634   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.715221   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.215582   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.715580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.215652   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.715281   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.215656   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.715759   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.714984   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.583205   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:59.083761   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:58.318525   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.318595   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.334704   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:58.818249   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.818360   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.834221   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.318385   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.318489   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.334283   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.818167   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.818231   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.834310   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.317793   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.317904   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.334063   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.817623   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.817702   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.832855   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.318481   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.318569   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.333716   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.818312   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.818413   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.834094   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.317571   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.317680   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.332422   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.817947   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.818044   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.834339   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.514446   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:02.015881   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:00.215747   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.214978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.715726   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.215092   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.715148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.215149   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.715717   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.215830   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.715275   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.084277   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.583278   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
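(Illustrative sketch only.) The pod_ready entries above repeatedly report whether a pod's Ready condition is True, e.g. for metrics-server-57f55c9bc5-5w6c9 in kube-system. The snippet below shows one common way to perform the same check with client-go; the kubeconfig path is a placeholder and the helper isPodReady is not a minikube function, just a minimal sketch of the readiness check being polled in these logs.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := client.CoreV1().Pods("kube-system").Get(
    		context.Background(), "metrics-server-57f55c9bc5-5w6c9", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }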
	I0229 18:59:03.318317   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.318410   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.334824   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.818569   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.818652   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.834206   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.834235   47515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:59:03.834244   47515 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:59:03.834255   47515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:59:03.834306   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:59:03.877464   47515 cri.go:89] found id: ""
	I0229 18:59:03.877543   47515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:59:03.901093   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:59:03.912185   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:59:03.912237   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923685   47515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923706   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:04.037753   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.127681   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089896164s)
	I0229 18:59:05.127710   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.363326   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.447053   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.525183   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:05.525276   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.026071   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.525747   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.026103   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.043681   47515 api_server.go:72] duration metric: took 1.518498943s to wait for apiserver process to appear ...
	I0229 18:59:07.043706   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:07.043728   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:04.518296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.014672   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:05.215563   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.215014   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.715750   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.215911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.215895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.715565   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.214999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:09.215096   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:09.270645   47919 cri.go:89] found id: ""
	I0229 18:59:09.270672   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.270683   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:09.270690   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:09.270748   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:09.335492   47919 cri.go:89] found id: ""
	I0229 18:59:09.335519   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.335530   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:09.335546   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:09.335627   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:09.405117   47919 cri.go:89] found id: ""
	I0229 18:59:09.405150   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.405160   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:09.405167   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:09.405233   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:09.451096   47919 cri.go:89] found id: ""
	I0229 18:59:09.451128   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.451140   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:09.451147   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:09.451226   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:09.498951   47919 cri.go:89] found id: ""
	I0229 18:59:09.498981   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.499007   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:09.499014   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:09.499091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:09.544447   47919 cri.go:89] found id: ""
	I0229 18:59:09.544474   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.544484   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:09.544491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:09.544548   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:09.597735   47919 cri.go:89] found id: ""
	I0229 18:59:09.597764   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.597775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:09.597782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:09.597836   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:09.648458   47919 cri.go:89] found id: ""
	I0229 18:59:09.648480   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.648489   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:09.648499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:09.648515   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:09.700744   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:09.700792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:09.717303   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:09.717332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:09.845966   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:09.845984   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:09.845995   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:09.913069   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:09.913106   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:05.583650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.584155   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.584605   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.527960   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.528037   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.528059   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.571679   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.571713   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.571738   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.647733   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:09.647780   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.044646   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.049310   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.049347   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.543904   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.551014   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.551055   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:11.044658   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:11.051170   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 18:59:11.059048   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:11.059076   47515 api_server.go:131] duration metric: took 4.015363545s to wait for apiserver health ...
	I0229 18:59:11.059085   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:59:11.059092   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:59:11.060915   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:59:11.062158   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:59:11.076961   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:59:11.109344   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:11.129625   47515 system_pods.go:59] 8 kube-system pods found
	I0229 18:59:11.129659   47515 system_pods.go:61] "coredns-76f75df574-dfrds" [ab7ce7e3-0532-48a1-9177-00e554d7e5af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:59:11.129668   47515 system_pods.go:61] "etcd-no-preload-247197" [e37e6d4c-5039-484e-98af-553ade3ba60f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:11.129679   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [933648a9-115f-4c2a-b699-48ef7409331c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:11.129692   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [b87a4a06-8a47-4cdf-a5e7-85f967e6332a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:11.129699   47515 system_pods.go:61] "kube-proxy-hjm9j" [a2e6ec66-78d9-4637-bb47-3f954969813b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:59:11.129707   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [cc52dc2c-cbe0-4cf0-8a2d-99a6f1880f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:11.129717   47515 system_pods.go:61] "metrics-server-57f55c9bc5-ggf8f" [dd2986a2-20a9-499c-805a-3c28819ff2f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:59:11.129726   47515 system_pods.go:61] "storage-provisioner" [22f64d5e-b947-43ed-9842-cb6e252fd4a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:59:11.129733   47515 system_pods.go:74] duration metric: took 20.366108ms to wait for pod list to return data ...
	I0229 18:59:11.129742   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:11.133259   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:59:11.133282   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 18:59:11.133294   47515 node_conditions.go:105] duration metric: took 3.545943ms to run NodePressure ...
	I0229 18:59:11.133313   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:11.618536   47515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625628   47515 kubeadm.go:787] kubelet initialised
	I0229 18:59:11.625649   47515 kubeadm.go:788] duration metric: took 7.089584ms waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625661   47515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:59:11.641122   47515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:09.515059   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:11.515286   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.013214   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:12.465591   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:12.479774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:12.479825   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:12.517591   47919 cri.go:89] found id: ""
	I0229 18:59:12.517620   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.517630   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:12.517637   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:12.517693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:12.560735   47919 cri.go:89] found id: ""
	I0229 18:59:12.560758   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.560769   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:12.560776   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:12.560843   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:12.600002   47919 cri.go:89] found id: ""
	I0229 18:59:12.600025   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.600033   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:12.600043   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:12.600088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:12.639223   47919 cri.go:89] found id: ""
	I0229 18:59:12.639252   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.639264   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:12.639272   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:12.639339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:12.682491   47919 cri.go:89] found id: ""
	I0229 18:59:12.682514   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.682524   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:12.682531   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:12.682590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:12.720669   47919 cri.go:89] found id: ""
	I0229 18:59:12.720693   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.720700   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:12.720706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:12.720773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:12.764880   47919 cri.go:89] found id: ""
	I0229 18:59:12.764908   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.764919   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:12.764926   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:12.765011   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:12.808987   47919 cri.go:89] found id: ""
	I0229 18:59:12.809019   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.809052   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:12.809064   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:12.809079   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:12.866228   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:12.866263   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:12.886698   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:12.886729   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:12.963092   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:12.963116   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:12.963129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:13.034485   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:13.034524   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:11.586793   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.081742   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:13.648688   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.648876   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:17.649478   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:16.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.015919   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.588224   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.603293   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:15.603368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:15.648746   47919 cri.go:89] found id: ""
	I0229 18:59:15.648771   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.648781   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:15.648788   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:15.648850   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:15.686420   47919 cri.go:89] found id: ""
	I0229 18:59:15.686447   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.686463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:15.686470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:15.686533   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:15.729410   47919 cri.go:89] found id: ""
	I0229 18:59:15.729439   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.729451   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:15.729458   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:15.729526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:15.768078   47919 cri.go:89] found id: ""
	I0229 18:59:15.768108   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.768119   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:15.768127   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:15.768188   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:15.806725   47919 cri.go:89] found id: ""
	I0229 18:59:15.806753   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.806765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:15.806772   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:15.806845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:15.848566   47919 cri.go:89] found id: ""
	I0229 18:59:15.848593   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.848600   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:15.848606   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:15.848657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:15.888907   47919 cri.go:89] found id: ""
	I0229 18:59:15.888930   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.888942   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:15.888948   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:15.889009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:15.926653   47919 cri.go:89] found id: ""
	I0229 18:59:15.926686   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.926708   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:15.926729   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:15.926746   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:15.976773   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:15.976812   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:15.995440   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:15.995477   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:16.103753   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:16.103774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:16.103786   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:16.188282   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:16.188319   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:18.733451   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:18.748528   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:18.748607   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:18.785998   47919 cri.go:89] found id: ""
	I0229 18:59:18.786055   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.786069   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:18.786078   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:18.786144   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:18.824234   47919 cri.go:89] found id: ""
	I0229 18:59:18.824260   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.824270   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:18.824277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:18.824339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:18.868586   47919 cri.go:89] found id: ""
	I0229 18:59:18.868615   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.868626   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:18.868633   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:18.868696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:18.912622   47919 cri.go:89] found id: ""
	I0229 18:59:18.912647   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.912655   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:18.912661   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:18.912708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:18.952001   47919 cri.go:89] found id: ""
	I0229 18:59:18.952029   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.952040   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:18.952047   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:18.952108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:18.993085   47919 cri.go:89] found id: ""
	I0229 18:59:18.993130   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.993140   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:18.993148   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:18.993209   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:19.041498   47919 cri.go:89] found id: ""
	I0229 18:59:19.041524   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.041536   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:19.041543   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:19.041601   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:19.099107   47919 cri.go:89] found id: ""
	I0229 18:59:19.099132   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.099143   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:19.099153   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:19.099169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:19.158578   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:19.158615   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:19.173561   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:19.173590   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:19.248498   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:19.248524   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:19.248540   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:19.326904   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:19.326939   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:16.085349   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.582796   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:20.148468   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.648188   47515 pod_ready.go:92] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:21.648214   47515 pod_ready.go:81] duration metric: took 10.0070638s waiting for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:21.648225   47515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:20.514234   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:22.514669   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.877087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:21.892919   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:21.892976   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:21.931119   47919 cri.go:89] found id: ""
	I0229 18:59:21.931147   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.931159   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:21.931167   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:21.931227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:21.971884   47919 cri.go:89] found id: ""
	I0229 18:59:21.971908   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.971916   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:21.971921   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:21.971975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:22.019170   47919 cri.go:89] found id: ""
	I0229 18:59:22.019206   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.019216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:22.019232   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:22.019311   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:22.078057   47919 cri.go:89] found id: ""
	I0229 18:59:22.078083   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.078093   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:22.078100   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:22.078162   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:22.128112   47919 cri.go:89] found id: ""
	I0229 18:59:22.128141   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.128151   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:22.128157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:22.128214   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:22.171354   47919 cri.go:89] found id: ""
	I0229 18:59:22.171382   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.171393   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:22.171400   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:22.171466   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:22.225620   47919 cri.go:89] found id: ""
	I0229 18:59:22.225642   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.225651   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:22.225658   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:22.225718   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:22.271291   47919 cri.go:89] found id: ""
	I0229 18:59:22.271320   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.271332   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:22.271343   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:22.271358   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:22.336735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:22.336765   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:22.354397   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:22.354425   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:22.432691   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:22.432713   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:22.432727   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:22.520239   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:22.520268   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:20.587039   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.084979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.086225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.657675   47515 pod_ready.go:102] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.656013   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.656050   47515 pod_ready.go:81] duration metric: took 4.007810687s waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.656064   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661235   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.661263   47515 pod_ready.go:81] duration metric: took 5.191999ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661273   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666649   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.666672   47515 pod_ready.go:81] duration metric: took 5.388774ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666680   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672042   47515 pod_ready.go:92] pod "kube-proxy-hjm9j" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.672067   47515 pod_ready.go:81] duration metric: took 5.380771ms waiting for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672076   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.676980   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.677001   47515 pod_ready.go:81] duration metric: took 4.919332ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.677013   47515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:27.684865   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.017772   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:27.513975   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.073478   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:25.105197   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:25.105262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:25.165700   47919 cri.go:89] found id: ""
	I0229 18:59:25.165728   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.165737   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:25.165744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:25.165810   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:25.210864   47919 cri.go:89] found id: ""
	I0229 18:59:25.210892   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.210904   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:25.210911   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:25.210974   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:25.257785   47919 cri.go:89] found id: ""
	I0229 18:59:25.257810   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.257820   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:25.257827   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:25.257888   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:25.299816   47919 cri.go:89] found id: ""
	I0229 18:59:25.299844   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.299855   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:25.299863   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:25.299933   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:25.339711   47919 cri.go:89] found id: ""
	I0229 18:59:25.339737   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.339746   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:25.339751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:25.339805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:25.381107   47919 cri.go:89] found id: ""
	I0229 18:59:25.381135   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.381145   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:25.381153   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:25.381211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:25.429029   47919 cri.go:89] found id: ""
	I0229 18:59:25.429054   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.429064   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:25.429071   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:25.429130   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:25.470598   47919 cri.go:89] found id: ""
	I0229 18:59:25.470629   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.470637   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:25.470644   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:25.470655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.516439   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:25.516476   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:25.569170   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:25.569204   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:25.584405   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:25.584430   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:25.663650   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:25.663671   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:25.663686   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.248036   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:28.263367   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:28.263440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:28.302232   47919 cri.go:89] found id: ""
	I0229 18:59:28.302259   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.302273   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:28.302281   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:28.302340   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:28.345147   47919 cri.go:89] found id: ""
	I0229 18:59:28.345173   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.345185   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:28.345192   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:28.345250   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:28.383671   47919 cri.go:89] found id: ""
	I0229 18:59:28.383690   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.383702   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:28.383709   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:28.383762   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:28.423737   47919 cri.go:89] found id: ""
	I0229 18:59:28.423762   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.423769   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:28.423774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:28.423826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:28.465679   47919 cri.go:89] found id: ""
	I0229 18:59:28.465705   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.465715   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:28.465723   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:28.465775   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:28.509703   47919 cri.go:89] found id: ""
	I0229 18:59:28.509731   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.509742   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:28.509754   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:28.509826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:28.549981   47919 cri.go:89] found id: ""
	I0229 18:59:28.550010   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.550021   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:28.550027   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:28.550093   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:28.589802   47919 cri.go:89] found id: ""
	I0229 18:59:28.589827   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.589834   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:28.589841   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:28.589853   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:28.670623   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:28.670644   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:28.670655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.765451   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:28.765484   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:28.821538   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:28.821571   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:28.889401   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:28.889438   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:27.583470   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.584344   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:30.184242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:32.184867   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.514804   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.516473   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.013518   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.406911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:31.422464   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:31.422541   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:31.460701   47919 cri.go:89] found id: ""
	I0229 18:59:31.460744   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.460755   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:31.460762   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:31.460822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:31.506966   47919 cri.go:89] found id: ""
	I0229 18:59:31.506996   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.507007   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:31.507013   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:31.507088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:31.542582   47919 cri.go:89] found id: ""
	I0229 18:59:31.542611   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.542623   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:31.542631   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:31.542693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:31.585470   47919 cri.go:89] found id: ""
	I0229 18:59:31.585496   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.585508   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:31.585516   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:31.585574   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:31.627751   47919 cri.go:89] found id: ""
	I0229 18:59:31.627785   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.627797   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:31.627805   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:31.627864   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:31.665988   47919 cri.go:89] found id: ""
	I0229 18:59:31.666009   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.666017   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:31.666023   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:31.666081   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:31.712553   47919 cri.go:89] found id: ""
	I0229 18:59:31.712583   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.712597   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:31.712603   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:31.712659   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.749904   47919 cri.go:89] found id: ""
	I0229 18:59:31.749944   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.749954   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:31.749965   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:31.749980   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:31.843949   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:31.843992   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:31.898158   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:31.898186   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:31.949798   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:31.949831   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.965666   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:31.965697   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:32.040368   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:34.541417   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:34.558286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:34.558345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:34.602083   47919 cri.go:89] found id: ""
	I0229 18:59:34.602113   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.602123   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:34.602130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:34.602200   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:34.647108   47919 cri.go:89] found id: ""
	I0229 18:59:34.647136   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.647146   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:34.647151   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:34.647220   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:34.692920   47919 cri.go:89] found id: ""
	I0229 18:59:34.692942   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.692950   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:34.692956   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:34.693000   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:34.739367   47919 cri.go:89] found id: ""
	I0229 18:59:34.739397   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.739408   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:34.739416   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:34.739478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:34.794083   47919 cri.go:89] found id: ""
	I0229 18:59:34.794106   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.794114   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:34.794120   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:34.794179   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:34.865371   47919 cri.go:89] found id: ""
	I0229 18:59:34.865400   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.865412   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:34.865419   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:34.865476   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:34.906957   47919 cri.go:89] found id: ""
	I0229 18:59:34.906986   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.906994   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:34.906999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:34.907063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.584743   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.085375   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.684397   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:37.183641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:36.015759   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.514451   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.948548   47919 cri.go:89] found id: ""
	I0229 18:59:34.948570   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.948577   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:34.948586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:34.948598   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:35.036558   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:35.036594   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:35.080137   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:35.080169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:35.130408   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:35.130436   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:35.148306   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:35.148332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:35.222648   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:37.723158   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:37.741809   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:37.741885   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:37.787147   47919 cri.go:89] found id: ""
	I0229 18:59:37.787177   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.787184   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:37.787192   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:37.787249   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:37.835589   47919 cri.go:89] found id: ""
	I0229 18:59:37.835613   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.835623   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:37.835630   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:37.835687   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:37.895088   47919 cri.go:89] found id: ""
	I0229 18:59:37.895118   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.895130   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:37.895137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:37.895194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:37.940837   47919 cri.go:89] found id: ""
	I0229 18:59:37.940867   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.940878   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:37.940886   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:37.940946   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:37.989155   47919 cri.go:89] found id: ""
	I0229 18:59:37.989183   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.989194   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:37.989203   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:37.989267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:38.026517   47919 cri.go:89] found id: ""
	I0229 18:59:38.026543   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.026553   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:38.026560   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:38.026623   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:38.063299   47919 cri.go:89] found id: ""
	I0229 18:59:38.063328   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.063340   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:38.063347   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:38.063393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:38.106278   47919 cri.go:89] found id: ""
	I0229 18:59:38.106298   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.106305   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:38.106315   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:38.106330   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:38.182985   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:38.183008   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:38.183038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:38.260280   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:38.260312   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:38.303648   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:38.303678   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:38.352889   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:38.352931   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:36.583258   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.583878   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:39.185221   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:41.684957   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.515303   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.017529   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.870416   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:40.885618   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:40.885692   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:40.924088   47919 cri.go:89] found id: ""
	I0229 18:59:40.924115   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.924126   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:40.924133   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:40.924192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:40.959485   47919 cri.go:89] found id: ""
	I0229 18:59:40.959513   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.959524   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:40.959532   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:40.959593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:41.009453   47919 cri.go:89] found id: ""
	I0229 18:59:41.009478   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.009489   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:41.009496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:41.009552   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:41.052894   47919 cri.go:89] found id: ""
	I0229 18:59:41.052922   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.052933   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:41.052940   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:41.052997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:41.098299   47919 cri.go:89] found id: ""
	I0229 18:59:41.098328   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.098338   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:41.098345   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:41.098460   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:41.138287   47919 cri.go:89] found id: ""
	I0229 18:59:41.138313   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.138324   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:41.138333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:41.138395   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:41.176482   47919 cri.go:89] found id: ""
	I0229 18:59:41.176512   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.176522   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:41.176529   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:41.176598   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:41.215284   47919 cri.go:89] found id: ""
	I0229 18:59:41.215307   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.215317   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:41.215327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:41.215342   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:41.230954   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:41.230982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:41.313672   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:41.313696   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:41.313713   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:41.393574   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:41.393610   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:41.443384   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:41.443422   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:43.994323   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:44.008821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:44.008892   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:44.050088   47919 cri.go:89] found id: ""
	I0229 18:59:44.050116   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.050124   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:44.050130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:44.050207   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:44.089721   47919 cri.go:89] found id: ""
	I0229 18:59:44.089749   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.089756   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:44.089762   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:44.089818   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:44.132366   47919 cri.go:89] found id: ""
	I0229 18:59:44.132398   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.132407   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:44.132412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:44.132468   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:44.173568   47919 cri.go:89] found id: ""
	I0229 18:59:44.173591   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.173598   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:44.173604   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:44.173661   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:44.214660   47919 cri.go:89] found id: ""
	I0229 18:59:44.214683   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.214691   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:44.214696   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:44.214747   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:44.254355   47919 cri.go:89] found id: ""
	I0229 18:59:44.254386   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.254397   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:44.254405   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:44.254464   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:44.293548   47919 cri.go:89] found id: ""
	I0229 18:59:44.293573   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.293584   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:44.293591   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:44.293652   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:44.333335   47919 cri.go:89] found id: ""
	I0229 18:59:44.333361   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.333372   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:44.333383   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:44.333398   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:44.348941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:44.348973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:44.419949   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:44.419968   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:44.419982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:44.503445   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:44.503479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:44.558694   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:44.558728   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:40.584127   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.084271   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.685573   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:46.184467   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:45.513896   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.514467   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.129362   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:47.145410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:47.145483   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:47.194037   47919 cri.go:89] found id: ""
	I0229 18:59:47.194073   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.194092   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:47.194100   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:47.194160   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:47.232500   47919 cri.go:89] found id: ""
	I0229 18:59:47.232528   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.232559   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:47.232568   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:47.232634   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:47.271452   47919 cri.go:89] found id: ""
	I0229 18:59:47.271485   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.271494   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:47.271501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:47.271561   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:47.313482   47919 cri.go:89] found id: ""
	I0229 18:59:47.313509   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.313520   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:47.313527   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:47.313590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:47.354958   47919 cri.go:89] found id: ""
	I0229 18:59:47.354988   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.354996   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:47.355001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:47.355092   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:47.393312   47919 cri.go:89] found id: ""
	I0229 18:59:47.393338   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.393349   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:47.393356   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:47.393415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:47.431370   47919 cri.go:89] found id: ""
	I0229 18:59:47.431396   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.431406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:47.431413   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:47.431471   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:47.471659   47919 cri.go:89] found id: ""
	I0229 18:59:47.471683   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.471692   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:47.471702   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:47.471715   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.530365   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:47.530405   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:47.558874   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:47.558903   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:47.644009   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:47.644033   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:47.644047   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:47.730063   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:47.730095   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:45.583524   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.585620   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.083189   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:48.684211   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.686885   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:49.514667   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:52.014092   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.272945   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:50.288718   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:50.288796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:50.331460   47919 cri.go:89] found id: ""
	I0229 18:59:50.331482   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.331489   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:50.331495   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:50.331543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:50.374960   47919 cri.go:89] found id: ""
	I0229 18:59:50.374989   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.375000   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:50.375006   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:50.375076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:50.415073   47919 cri.go:89] found id: ""
	I0229 18:59:50.415095   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.415102   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:50.415107   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:50.415157   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:50.452511   47919 cri.go:89] found id: ""
	I0229 18:59:50.452554   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.452563   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:50.452568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:50.452612   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:50.498103   47919 cri.go:89] found id: ""
	I0229 18:59:50.498125   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.498132   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:50.498137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:50.498193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:50.545366   47919 cri.go:89] found id: ""
	I0229 18:59:50.545397   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.545409   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:50.545417   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:50.545487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:50.608215   47919 cri.go:89] found id: ""
	I0229 18:59:50.608239   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.608250   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:50.608257   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:50.608314   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:50.660835   47919 cri.go:89] found id: ""
	I0229 18:59:50.660861   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.660881   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:50.660892   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:50.660907   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:50.749671   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:50.749712   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.797567   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:50.797595   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:50.848022   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:50.848059   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:50.862797   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:50.862820   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:50.934682   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:53.435804   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:53.451364   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:53.451440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:53.500680   47919 cri.go:89] found id: ""
	I0229 18:59:53.500706   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.500717   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:53.500744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:53.500797   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:53.565306   47919 cri.go:89] found id: ""
	I0229 18:59:53.565334   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.565344   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:53.565351   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:53.565410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:53.631438   47919 cri.go:89] found id: ""
	I0229 18:59:53.631461   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.631479   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:53.631486   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:53.631554   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:53.679482   47919 cri.go:89] found id: ""
	I0229 18:59:53.679506   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.679516   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:53.679524   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:53.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:53.722098   47919 cri.go:89] found id: ""
	I0229 18:59:53.722125   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.722135   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:53.722142   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:53.722211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:53.761804   47919 cri.go:89] found id: ""
	I0229 18:59:53.761838   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.761849   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:53.761858   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:53.761942   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:53.806109   47919 cri.go:89] found id: ""
	I0229 18:59:53.806137   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.806149   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:53.806157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:53.806219   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:53.856794   47919 cri.go:89] found id: ""
	I0229 18:59:53.856823   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.856831   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:53.856839   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:53.856849   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:53.908216   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:53.908252   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:53.923999   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:53.924038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:54.000750   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:54.000772   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:54.000783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:54.086840   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:54.086870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:52.083751   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.586556   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:53.184426   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:55.683893   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:57.685129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.513193   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.515925   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.013745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.630728   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:56.647368   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:56.647440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:56.693706   47919 cri.go:89] found id: ""
	I0229 18:59:56.693726   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.693733   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:56.693738   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:56.693780   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:56.733377   47919 cri.go:89] found id: ""
	I0229 18:59:56.733404   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.733415   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:56.733423   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:56.733491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:56.772186   47919 cri.go:89] found id: ""
	I0229 18:59:56.772209   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.772216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:56.772222   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:56.772267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:56.811919   47919 cri.go:89] found id: ""
	I0229 18:59:56.811964   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.811977   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:56.811984   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:56.812035   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.849345   47919 cri.go:89] found id: ""
	I0229 18:59:56.849372   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.849383   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:56.849390   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:56.849447   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:56.900091   47919 cri.go:89] found id: ""
	I0229 18:59:56.900119   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.900129   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:56.900136   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:56.900193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:56.937662   47919 cri.go:89] found id: ""
	I0229 18:59:56.937692   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.937703   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:56.937710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:56.937772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:56.978195   47919 cri.go:89] found id: ""
	I0229 18:59:56.978224   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.978234   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:56.978244   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:56.978259   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:57.059190   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:57.059223   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:57.101416   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:57.101442   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:57.156102   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:57.156140   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:57.171401   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:57.171435   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:57.243717   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:59.744588   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:59.760099   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:59.760175   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:59.798722   47919 cri.go:89] found id: ""
	I0229 18:59:59.798751   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.798762   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:59.798770   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:59.798830   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:59.842423   47919 cri.go:89] found id: ""
	I0229 18:59:59.842452   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.842463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:59.842470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:59.842532   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:59.883742   47919 cri.go:89] found id: ""
	I0229 18:59:59.883768   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.883775   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:59.883781   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:59.883826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:59.924062   47919 cri.go:89] found id: ""
	I0229 18:59:59.924091   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.924102   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:59.924109   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:59.924166   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.587621   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.087882   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.685911   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:02.185406   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:01.014202   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.014972   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.962465   47919 cri.go:89] found id: ""
	I0229 18:59:59.962497   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.962508   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:59.962515   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:59.962576   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:00.006069   47919 cri.go:89] found id: ""
	I0229 19:00:00.006103   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.006114   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:00.006123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:00.006185   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:00.047671   47919 cri.go:89] found id: ""
	I0229 19:00:00.047697   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.047709   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:00.047715   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:00.047773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:00.091452   47919 cri.go:89] found id: ""
	I0229 19:00:00.091475   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.091486   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:00.091497   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:00.091511   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:00.143282   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:00.143313   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:00.158342   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:00.158366   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:00.239745   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:00.239774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:00.239792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:00.339048   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:00.339083   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:02.898414   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:02.914154   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:02.914221   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:02.956122   47919 cri.go:89] found id: ""
	I0229 19:00:02.956151   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.956211   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:02.956225   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:02.956272   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:02.993609   47919 cri.go:89] found id: ""
	I0229 19:00:02.993636   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.993646   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:02.993659   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:02.993720   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:03.038131   47919 cri.go:89] found id: ""
	I0229 19:00:03.038152   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.038160   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:03.038165   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:03.038217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:03.090845   47919 cri.go:89] found id: ""
	I0229 19:00:03.090866   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.090873   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:03.090878   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:03.090935   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:03.129520   47919 cri.go:89] found id: ""
	I0229 19:00:03.129549   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.129561   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:03.129568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:03.129620   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:03.178528   47919 cri.go:89] found id: ""
	I0229 19:00:03.178557   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.178567   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:03.178575   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:03.178631   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:03.218337   47919 cri.go:89] found id: ""
	I0229 19:00:03.218357   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.218364   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:03.218369   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:03.218417   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:03.267682   47919 cri.go:89] found id: ""
	I0229 19:00:03.267713   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.267726   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:03.267735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:03.267753   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:03.286961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:03.286987   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:03.376514   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:03.376535   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:03.376546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:03.459824   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:03.459872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:03.505821   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:03.505848   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:01.582954   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.583198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:04.684892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.685508   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:05.015836   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:07.514376   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.062525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:06.077637   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:06.077708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:06.119344   47919 cri.go:89] found id: ""
	I0229 19:00:06.119368   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.119376   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:06.119381   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:06.119430   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:06.158209   47919 cri.go:89] found id: ""
	I0229 19:00:06.158232   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.158239   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:06.158245   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:06.158291   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:06.198521   47919 cri.go:89] found id: ""
	I0229 19:00:06.198545   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.198553   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:06.198559   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:06.198609   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:06.235872   47919 cri.go:89] found id: ""
	I0229 19:00:06.235919   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.235930   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:06.235937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:06.235998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:06.282814   47919 cri.go:89] found id: ""
	I0229 19:00:06.282841   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.282853   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:06.282860   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:06.282928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:06.330549   47919 cri.go:89] found id: ""
	I0229 19:00:06.330572   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.330580   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:06.330585   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:06.330632   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:06.399968   47919 cri.go:89] found id: ""
	I0229 19:00:06.399996   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.400006   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:06.400012   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:06.400062   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:06.444899   47919 cri.go:89] found id: ""
	I0229 19:00:06.444921   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.444929   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:06.444937   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:06.444950   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:06.460552   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:06.460580   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:06.532932   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:06.532956   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:06.532969   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:06.615130   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:06.615170   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:06.664499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:06.664532   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.219226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:09.236769   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:09.236829   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:09.292309   47919 cri.go:89] found id: ""
	I0229 19:00:09.292331   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.292339   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:09.292345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:09.292392   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:09.355237   47919 cri.go:89] found id: ""
	I0229 19:00:09.355259   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.355267   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:09.355272   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:09.355319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:09.397950   47919 cri.go:89] found id: ""
	I0229 19:00:09.397977   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.397987   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:09.397995   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:09.398057   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:09.436751   47919 cri.go:89] found id: ""
	I0229 19:00:09.436779   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.436789   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:09.436797   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:09.436862   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:09.480288   47919 cri.go:89] found id: ""
	I0229 19:00:09.480311   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.480318   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:09.480324   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:09.480375   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:09.523576   47919 cri.go:89] found id: ""
	I0229 19:00:09.523599   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.523606   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:09.523611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:09.523658   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:09.562818   47919 cri.go:89] found id: ""
	I0229 19:00:09.562848   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.562859   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:09.562872   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:09.562919   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:09.603331   47919 cri.go:89] found id: ""
	I0229 19:00:09.603357   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.603369   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:09.603379   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:09.603393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.652060   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:09.652089   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:09.668372   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:09.668394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:09.745897   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:09.745923   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:09.745937   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:09.826981   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:09.827014   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:05.590288   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:08.083411   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.084324   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:09.184577   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:11.185922   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.015288   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.513820   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.371447   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:12.385523   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:12.385613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:12.422038   47919 cri.go:89] found id: ""
	I0229 19:00:12.422067   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.422077   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:12.422084   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:12.422155   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:12.460443   47919 cri.go:89] found id: ""
	I0229 19:00:12.460470   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.460487   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:12.460495   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:12.460551   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:12.502791   47919 cri.go:89] found id: ""
	I0229 19:00:12.502820   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.502830   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:12.502838   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:12.502897   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:12.540738   47919 cri.go:89] found id: ""
	I0229 19:00:12.540769   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.540780   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:12.540786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:12.540845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:12.580041   47919 cri.go:89] found id: ""
	I0229 19:00:12.580072   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.580084   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:12.580091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:12.580151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:12.620721   47919 cri.go:89] found id: ""
	I0229 19:00:12.620750   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.620758   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:12.620763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:12.620820   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:12.659877   47919 cri.go:89] found id: ""
	I0229 19:00:12.659906   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.659917   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:12.659925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:12.659975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:12.699133   47919 cri.go:89] found id: ""
	I0229 19:00:12.699160   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.699170   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:12.699177   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:12.699188   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.742164   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:12.742189   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:12.792215   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:12.792248   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:12.808322   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:12.808344   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:12.879089   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:12.879114   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:12.879129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:12.586572   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.083323   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:13.687899   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:16.184671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:14.521430   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:17.013799   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:19.014661   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.466778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:15.480875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:15.480945   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:15.525331   47919 cri.go:89] found id: ""
	I0229 19:00:15.525353   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.525360   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:15.525366   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:15.525422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:15.567787   47919 cri.go:89] found id: ""
	I0229 19:00:15.567819   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.567831   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:15.567838   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:15.567923   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:15.609440   47919 cri.go:89] found id: ""
	I0229 19:00:15.609467   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.609477   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:15.609484   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:15.609559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:15.650113   47919 cri.go:89] found id: ""
	I0229 19:00:15.650142   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.650153   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:15.650161   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:15.650223   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:15.691499   47919 cri.go:89] found id: ""
	I0229 19:00:15.691527   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.691537   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:15.691544   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:15.691603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:15.731199   47919 cri.go:89] found id: ""
	I0229 19:00:15.731227   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.731239   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:15.731246   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:15.731324   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:15.772997   47919 cri.go:89] found id: ""
	I0229 19:00:15.773019   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.773027   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:15.773032   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:15.773091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:15.811223   47919 cri.go:89] found id: ""
	I0229 19:00:15.811244   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.811252   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:15.811271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:15.811283   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:15.862159   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:15.862196   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:15.877436   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:15.877460   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:15.948486   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:15.948513   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:15.948525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:16.030585   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:16.030617   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:18.592020   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:18.607286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:18.607368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:18.647886   47919 cri.go:89] found id: ""
	I0229 19:00:18.647913   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.647924   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:18.647951   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:18.648007   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:18.687394   47919 cri.go:89] found id: ""
	I0229 19:00:18.687420   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.687430   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:18.687436   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:18.687491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:18.734159   47919 cri.go:89] found id: ""
	I0229 19:00:18.734187   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.734198   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:18.734205   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:18.734262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:18.782950   47919 cri.go:89] found id: ""
	I0229 19:00:18.782989   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.783000   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:18.783008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:18.783089   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:18.818695   47919 cri.go:89] found id: ""
	I0229 19:00:18.818723   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.818734   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:18.818742   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:18.818805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:18.859479   47919 cri.go:89] found id: ""
	I0229 19:00:18.859504   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.859515   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:18.859522   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:18.859580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:18.902897   47919 cri.go:89] found id: ""
	I0229 19:00:18.902923   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.902934   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:18.902942   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:18.903002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:18.947708   47919 cri.go:89] found id: ""
	I0229 19:00:18.947731   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.947742   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:18.947752   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:18.947772   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:19.025069   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:19.025092   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:19.025107   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:19.115589   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:19.115626   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:19.164930   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:19.164960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:19.217497   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:19.217531   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:17.584961   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:20.081558   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:18.685924   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.184830   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.015314   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.513573   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.733516   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:21.748586   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:21.748648   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:21.788383   47919 cri.go:89] found id: ""
	I0229 19:00:21.788409   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.788420   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:21.788429   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:21.788487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:21.827147   47919 cri.go:89] found id: ""
	I0229 19:00:21.827176   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.827187   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:21.827194   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:21.827255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:21.867525   47919 cri.go:89] found id: ""
	I0229 19:00:21.867552   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.867561   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:21.867570   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:21.867618   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:21.911542   47919 cri.go:89] found id: ""
	I0229 19:00:21.911564   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.911573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:21.911578   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:21.911629   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:21.949779   47919 cri.go:89] found id: ""
	I0229 19:00:21.949803   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.949815   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:21.949821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:21.949877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:21.989663   47919 cri.go:89] found id: ""
	I0229 19:00:21.989692   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.989701   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:21.989706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:21.989750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:22.040777   47919 cri.go:89] found id: ""
	I0229 19:00:22.040803   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.040813   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:22.040820   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:22.040876   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:22.100661   47919 cri.go:89] found id: ""
	I0229 19:00:22.100682   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.100689   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:22.100697   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:22.100707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:22.165652   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:22.165682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:22.180278   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:22.180301   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:22.250220   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:22.250242   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:22.250254   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:22.339122   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:22.339160   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:24.894485   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:24.910480   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:24.910555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:22.086489   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.582331   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.685199   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:26.185268   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:25.514168   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.014178   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.949857   47919 cri.go:89] found id: ""
	I0229 19:00:24.949880   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.949891   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:24.949898   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:24.949968   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:24.993325   47919 cri.go:89] found id: ""
	I0229 19:00:24.993355   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.993366   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:24.993374   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:24.993431   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:25.053180   47919 cri.go:89] found id: ""
	I0229 19:00:25.053201   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.053208   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:25.053214   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:25.053269   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:25.105886   47919 cri.go:89] found id: ""
	I0229 19:00:25.105912   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.105919   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:25.105924   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:25.105969   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:25.161860   47919 cri.go:89] found id: ""
	I0229 19:00:25.161889   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.161907   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:25.161918   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:25.161982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:25.208566   47919 cri.go:89] found id: ""
	I0229 19:00:25.208591   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.208601   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:25.208625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:25.208690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:25.252151   47919 cri.go:89] found id: ""
	I0229 19:00:25.252173   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.252183   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:25.252190   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:25.252255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:25.293860   47919 cri.go:89] found id: ""
	I0229 19:00:25.293892   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.293903   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:25.293913   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:25.293926   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:25.343332   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:25.343367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:25.357855   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:25.357883   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:25.438031   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:25.438052   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:25.438064   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:25.523752   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:25.523789   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.078701   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:28.103422   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:28.103514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:28.149369   47919 cri.go:89] found id: ""
	I0229 19:00:28.149396   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.149407   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:28.149414   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:28.149481   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:28.191312   47919 cri.go:89] found id: ""
	I0229 19:00:28.191340   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.191350   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:28.191357   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:28.191422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:28.232257   47919 cri.go:89] found id: ""
	I0229 19:00:28.232283   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.232293   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:28.232301   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:28.232370   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:28.278477   47919 cri.go:89] found id: ""
	I0229 19:00:28.278502   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.278512   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:28.278520   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:28.278580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:28.319368   47919 cri.go:89] found id: ""
	I0229 19:00:28.319393   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.319401   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:28.319406   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:28.319451   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:28.363604   47919 cri.go:89] found id: ""
	I0229 19:00:28.363628   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.363636   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:28.363642   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:28.363688   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:28.403101   47919 cri.go:89] found id: ""
	I0229 19:00:28.403126   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.403137   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:28.403144   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:28.403203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:28.443915   47919 cri.go:89] found id: ""
	I0229 19:00:28.443939   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.443949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:28.443961   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:28.443974   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:28.459084   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:28.459112   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:28.531798   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:28.531827   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:28.531843   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:28.618141   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:28.618182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.664993   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:28.665024   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:26.582801   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.584979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.684541   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.184185   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:30.014681   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:32.513959   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.218793   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:31.234816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:31.234890   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:31.273656   47919 cri.go:89] found id: ""
	I0229 19:00:31.273684   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.273692   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:31.273698   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:31.273744   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:31.316292   47919 cri.go:89] found id: ""
	I0229 19:00:31.316314   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.316322   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:31.316330   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:31.316391   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:31.356701   47919 cri.go:89] found id: ""
	I0229 19:00:31.356730   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.356742   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:31.356760   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:31.356813   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:31.395796   47919 cri.go:89] found id: ""
	I0229 19:00:31.395822   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.395830   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:31.395835   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:31.395884   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:31.436461   47919 cri.go:89] found id: ""
	I0229 19:00:31.436483   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.436491   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:31.436496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:31.436543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:31.482802   47919 cri.go:89] found id: ""
	I0229 19:00:31.482830   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.482840   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:31.482848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:31.482895   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:31.525897   47919 cri.go:89] found id: ""
	I0229 19:00:31.525930   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.525939   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:31.525949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:31.526009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:31.566323   47919 cri.go:89] found id: ""
	I0229 19:00:31.566350   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.566362   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:31.566372   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:31.566388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.618633   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:31.618674   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:31.634144   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:31.634166   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:31.712112   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:31.712136   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:31.712150   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:31.795159   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:31.795190   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:34.365419   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:34.380447   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:34.380521   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:34.422256   47919 cri.go:89] found id: ""
	I0229 19:00:34.422284   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.422295   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:34.422302   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:34.422359   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:34.466548   47919 cri.go:89] found id: ""
	I0229 19:00:34.466578   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.466588   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:34.466596   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:34.466654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:34.508359   47919 cri.go:89] found id: ""
	I0229 19:00:34.508395   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.508407   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:34.508414   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:34.508482   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:34.551284   47919 cri.go:89] found id: ""
	I0229 19:00:34.551308   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.551319   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:34.551325   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:34.551371   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:34.593360   47919 cri.go:89] found id: ""
	I0229 19:00:34.593385   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.593395   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:34.593403   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:34.593469   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:34.632097   47919 cri.go:89] found id: ""
	I0229 19:00:34.632117   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.632124   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:34.632135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:34.632180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:34.679495   47919 cri.go:89] found id: ""
	I0229 19:00:34.679521   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.679529   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:34.679534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:34.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:34.723322   47919 cri.go:89] found id: ""
	I0229 19:00:34.723351   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.723361   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:34.723371   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:34.723387   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:34.741497   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:34.741525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:34.833908   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:34.833932   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:34.833944   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:34.927172   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:34.927203   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:31.083690   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.583972   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.186129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:35.685350   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.514619   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:36.514937   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:39.014137   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.980487   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:34.980520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:37.535829   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:37.551274   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:37.551342   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:37.590225   47919 cri.go:89] found id: ""
	I0229 19:00:37.590263   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.590282   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:37.590289   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:37.590347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:37.630546   47919 cri.go:89] found id: ""
	I0229 19:00:37.630574   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.630585   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:37.630592   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:37.630651   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:37.676219   47919 cri.go:89] found id: ""
	I0229 19:00:37.676250   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.676261   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:37.676268   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:37.676329   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:37.713689   47919 cri.go:89] found id: ""
	I0229 19:00:37.713712   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.713721   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:37.713729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:37.713791   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:37.767999   47919 cri.go:89] found id: ""
	I0229 19:00:37.768034   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.768049   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:37.768057   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:37.768114   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:37.816836   47919 cri.go:89] found id: ""
	I0229 19:00:37.816865   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.816876   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:37.816884   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:37.816948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:37.876044   47919 cri.go:89] found id: ""
	I0229 19:00:37.876072   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.876084   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:37.876091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:37.876151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:37.926075   47919 cri.go:89] found id: ""
	I0229 19:00:37.926110   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.926122   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:37.926132   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:37.926147   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:38.004621   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:38.004648   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:38.004663   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:38.091456   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:38.091493   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:38.140118   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:38.140144   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:38.197206   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:38.197243   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:35.587937   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.082516   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.083269   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.184999   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.684029   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:42.684537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:41.016248   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:43.018730   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.713817   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:40.731550   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:40.731613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:40.787760   47919 cri.go:89] found id: ""
	I0229 19:00:40.787788   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.787798   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:40.787806   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:40.787868   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:40.847842   47919 cri.go:89] found id: ""
	I0229 19:00:40.847870   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.847881   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:40.847888   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:40.847956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:40.888452   47919 cri.go:89] found id: ""
	I0229 19:00:40.888481   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.888493   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:40.888501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:40.888562   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:40.927727   47919 cri.go:89] found id: ""
	I0229 19:00:40.927749   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.927757   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:40.927762   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:40.927821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:40.967696   47919 cri.go:89] found id: ""
	I0229 19:00:40.967725   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.967737   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:40.967745   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:40.967804   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:41.008092   47919 cri.go:89] found id: ""
	I0229 19:00:41.008117   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.008127   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:41.008135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:41.008190   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:41.049235   47919 cri.go:89] found id: ""
	I0229 19:00:41.049265   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.049277   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:41.049285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:41.049393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:41.092962   47919 cri.go:89] found id: ""
	I0229 19:00:41.092988   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.092999   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:41.093018   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:41.093033   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:41.146322   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:41.146368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:41.161961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:41.161986   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:41.248674   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:41.248705   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:41.248732   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:41.333647   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:41.333689   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:43.882007   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:43.897786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:43.897860   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:43.943918   47919 cri.go:89] found id: ""
	I0229 19:00:43.943946   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.943955   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:43.943960   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:43.944010   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:43.988622   47919 cri.go:89] found id: ""
	I0229 19:00:43.988643   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.988650   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:43.988655   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:43.988699   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:44.036419   47919 cri.go:89] found id: ""
	I0229 19:00:44.036455   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.036466   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:44.036471   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:44.036530   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:44.078018   47919 cri.go:89] found id: ""
	I0229 19:00:44.078046   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.078056   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:44.078063   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:44.078119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:44.116142   47919 cri.go:89] found id: ""
	I0229 19:00:44.116168   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.116177   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:44.116183   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:44.116243   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:44.158804   47919 cri.go:89] found id: ""
	I0229 19:00:44.158826   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.158833   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:44.158839   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:44.158889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:44.204069   47919 cri.go:89] found id: ""
	I0229 19:00:44.204096   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.204106   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:44.204114   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:44.204173   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:44.247904   47919 cri.go:89] found id: ""
	I0229 19:00:44.247935   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.247949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:44.247959   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:44.247973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:44.338653   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:44.338690   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:44.384041   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:44.384069   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:44.439539   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:44.439575   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:44.455345   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:44.455372   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:44.538204   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:42.083656   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:44.584493   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.184119   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.684925   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.513638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:48.014638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.038895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:47.054457   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:47.054539   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:47.099854   47919 cri.go:89] found id: ""
	I0229 19:00:47.099879   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.099890   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.099899   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:47.099956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:47.141354   47919 cri.go:89] found id: ""
	I0229 19:00:47.141381   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.141391   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.141398   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:47.141454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:47.181906   47919 cri.go:89] found id: ""
	I0229 19:00:47.181932   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.181942   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.181949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:47.182003   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:47.222505   47919 cri.go:89] found id: ""
	I0229 19:00:47.222530   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.222538   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:47.222548   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:47.222603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:47.265567   47919 cri.go:89] found id: ""
	I0229 19:00:47.265604   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.265616   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:47.265625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:47.265690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:47.304698   47919 cri.go:89] found id: ""
	I0229 19:00:47.304723   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.304730   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:47.304736   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:47.304781   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:47.344154   47919 cri.go:89] found id: ""
	I0229 19:00:47.344175   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.344182   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:47.344187   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:47.344230   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:47.383849   47919 cri.go:89] found id: ""
	I0229 19:00:47.383878   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.383889   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:47.383900   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:47.383915   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:47.458895   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:47.458914   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:47.458933   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:47.547776   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:47.547823   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:47.622606   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:47.622639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:47.685327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:47.685356   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:47.084225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:49.584008   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.186274   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.684452   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.014671   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.514321   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.202151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:50.218008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:50.218063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:50.265322   47919 cri.go:89] found id: ""
	I0229 19:00:50.265345   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.265353   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:50.265358   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:50.265424   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:50.305646   47919 cri.go:89] found id: ""
	I0229 19:00:50.305669   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.305677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:50.305682   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:50.305732   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:50.342855   47919 cri.go:89] found id: ""
	I0229 19:00:50.342885   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.342894   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:50.342899   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:50.342948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:50.385365   47919 cri.go:89] found id: ""
	I0229 19:00:50.385396   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.385404   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:50.385410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:50.385456   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:50.425212   47919 cri.go:89] found id: ""
	I0229 19:00:50.425238   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.425256   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:50.425263   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:50.425321   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:50.465325   47919 cri.go:89] found id: ""
	I0229 19:00:50.465355   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.465366   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:50.465382   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:50.465455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:50.516256   47919 cri.go:89] found id: ""
	I0229 19:00:50.516282   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.516291   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:50.516297   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:50.516355   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:50.562233   47919 cri.go:89] found id: ""
	I0229 19:00:50.562262   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.562272   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:50.562280   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:50.562292   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:50.660311   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:50.660346   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:50.702790   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:50.702815   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:50.752085   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:50.752123   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.768346   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:50.768378   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:50.842567   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.343011   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:53.358002   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:53.358072   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:53.398397   47919 cri.go:89] found id: ""
	I0229 19:00:53.398424   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.398433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:53.398440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:53.398501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:53.437020   47919 cri.go:89] found id: ""
	I0229 19:00:53.437048   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.437059   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:53.437067   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:53.437116   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:53.473350   47919 cri.go:89] found id: ""
	I0229 19:00:53.473377   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.473388   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:53.473395   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:53.473454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:53.525678   47919 cri.go:89] found id: ""
	I0229 19:00:53.525701   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.525708   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:53.525716   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:53.525772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:53.595411   47919 cri.go:89] found id: ""
	I0229 19:00:53.595437   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.595448   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:53.595456   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:53.595518   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:53.635890   47919 cri.go:89] found id: ""
	I0229 19:00:53.635916   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.635923   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:53.635929   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:53.635992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:53.674966   47919 cri.go:89] found id: ""
	I0229 19:00:53.674992   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.675000   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:53.675005   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:53.675076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:53.713839   47919 cri.go:89] found id: ""
	I0229 19:00:53.713860   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.713868   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:53.713882   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:53.713896   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:53.765185   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:53.765219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:53.780830   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:53.780855   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:53.858528   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.858552   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:53.858567   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:53.936002   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:53.936034   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:52.085082   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:54.583306   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.184645   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.684780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.015941   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.017683   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:56.481406   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:56.498980   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:56.499059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:56.557482   47919 cri.go:89] found id: ""
	I0229 19:00:56.557509   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.557520   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:56.557528   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:56.557587   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:56.625912   47919 cri.go:89] found id: ""
	I0229 19:00:56.625941   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.625952   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:56.625964   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:56.626023   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:56.663104   47919 cri.go:89] found id: ""
	I0229 19:00:56.663193   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.663210   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:56.663217   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:56.663265   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:56.707473   47919 cri.go:89] found id: ""
	I0229 19:00:56.707494   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.707502   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:56.707507   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:56.707564   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:56.752569   47919 cri.go:89] found id: ""
	I0229 19:00:56.752593   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.752604   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:56.752611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:56.752673   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:56.793618   47919 cri.go:89] found id: ""
	I0229 19:00:56.793660   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.793672   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:56.793680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:56.793741   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:56.833215   47919 cri.go:89] found id: ""
	I0229 19:00:56.833241   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.833252   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:56.833259   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:56.833319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.873162   47919 cri.go:89] found id: ""
	I0229 19:00:56.873187   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.873195   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:56.873203   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:56.873219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:56.887683   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:56.887707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:56.957351   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:56.957369   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:56.957380   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:57.042415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:57.042449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:57.087636   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:57.087660   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:59.637662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:59.652747   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:59.652815   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:59.692780   47919 cri.go:89] found id: ""
	I0229 19:00:59.692801   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.692809   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:59.692814   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:59.692891   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:59.733445   47919 cri.go:89] found id: ""
	I0229 19:00:59.733474   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.733482   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:59.733488   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:59.733535   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:59.769723   47919 cri.go:89] found id: ""
	I0229 19:00:59.769754   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.769764   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:59.769770   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:59.769828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:59.807810   47919 cri.go:89] found id: ""
	I0229 19:00:59.807837   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.807848   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:59.807855   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:59.807916   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:59.849623   47919 cri.go:89] found id: ""
	I0229 19:00:59.849649   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.849659   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:59.849666   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:59.849730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:59.895593   47919 cri.go:89] found id: ""
	I0229 19:00:59.895620   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.895631   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:59.895638   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:59.895698   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:59.935693   47919 cri.go:89] found id: ""
	I0229 19:00:59.935716   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.935724   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:59.935729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:59.935786   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.585093   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.083485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.687672   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:02.184276   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:01.027786   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.514296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.977655   47919 cri.go:89] found id: ""
	I0229 19:00:59.977685   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.977693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:59.977710   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:59.977725   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:59.992518   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:59.992545   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:00.075660   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:00.075679   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:00.075691   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:00.162338   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:00.162384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:00.207000   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:00.207049   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:02.759942   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:02.776225   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:02.776293   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:02.812511   47919 cri.go:89] found id: ""
	I0229 19:01:02.812538   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.812549   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:02.812556   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:02.812614   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:02.851417   47919 cri.go:89] found id: ""
	I0229 19:01:02.851448   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.851467   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:02.851483   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:02.851560   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:02.894440   47919 cri.go:89] found id: ""
	I0229 19:01:02.894465   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.894475   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:02.894487   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:02.894542   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:02.931046   47919 cri.go:89] found id: ""
	I0229 19:01:02.931075   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.931084   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:02.931092   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:02.931150   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:02.971204   47919 cri.go:89] found id: ""
	I0229 19:01:02.971226   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.971233   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:02.971238   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:02.971307   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:03.011695   47919 cri.go:89] found id: ""
	I0229 19:01:03.011723   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.011734   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:03.011741   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:03.011796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:03.054738   47919 cri.go:89] found id: ""
	I0229 19:01:03.054763   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.054775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:03.054782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:03.054857   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:03.099242   47919 cri.go:89] found id: ""
	I0229 19:01:03.099267   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.099278   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:03.099289   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:03.099303   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:03.148748   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:03.148778   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:03.164550   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:03.164578   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:03.241564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:03.241586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:03.241601   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:03.329350   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:03.329384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:01.085890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.582960   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:04.683846   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:06.684979   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.514444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.014275   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.884415   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:05.901979   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:05.902044   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:05.946382   47919 cri.go:89] found id: ""
	I0229 19:01:05.946407   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.946415   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:05.946421   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:05.946488   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:05.991783   47919 cri.go:89] found id: ""
	I0229 19:01:05.991807   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.991816   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:05.991822   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:05.991879   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:06.034390   47919 cri.go:89] found id: ""
	I0229 19:01:06.034417   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.034426   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:06.034431   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:06.034475   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:06.078417   47919 cri.go:89] found id: ""
	I0229 19:01:06.078445   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.078456   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:06.078463   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:06.078527   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:06.119892   47919 cri.go:89] found id: ""
	I0229 19:01:06.119927   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.119938   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:06.119952   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:06.120008   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:06.159308   47919 cri.go:89] found id: ""
	I0229 19:01:06.159332   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.159339   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:06.159346   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:06.159410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:06.208715   47919 cri.go:89] found id: ""
	I0229 19:01:06.208742   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.208751   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:06.208756   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:06.208812   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:06.253831   47919 cri.go:89] found id: ""
	I0229 19:01:06.253858   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.253866   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:06.253881   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:06.253895   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:06.315105   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:06.315141   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:06.349340   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:06.349386   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:06.431456   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:06.431477   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:06.431492   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:06.517754   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:06.517783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.064267   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:09.078751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:09.078822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:09.130371   47919 cri.go:89] found id: ""
	I0229 19:01:09.130396   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.130404   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:09.130410   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:09.130461   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:09.166312   47919 cri.go:89] found id: ""
	I0229 19:01:09.166340   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.166351   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:09.166359   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:09.166415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:09.202957   47919 cri.go:89] found id: ""
	I0229 19:01:09.202978   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.202985   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:09.202991   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:09.203050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:09.242350   47919 cri.go:89] found id: ""
	I0229 19:01:09.242380   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.242391   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:09.242399   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:09.242455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:09.300471   47919 cri.go:89] found id: ""
	I0229 19:01:09.300492   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.300500   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:09.300505   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:09.300568   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:09.356861   47919 cri.go:89] found id: ""
	I0229 19:01:09.356886   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.356893   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:09.356898   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:09.356965   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:09.411042   47919 cri.go:89] found id: ""
	I0229 19:01:09.411067   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.411075   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:09.411080   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:09.411136   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:09.446312   47919 cri.go:89] found id: ""
	I0229 19:01:09.446336   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.446347   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:09.446356   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:09.446367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.492195   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:09.492227   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:09.541943   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:09.541973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:09.557347   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:09.557373   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:09.635319   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:09.635363   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:09.635379   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:05.584255   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.082899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.083808   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:09.189158   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:11.684731   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.513801   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.514492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.224271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:12.243330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:12.243403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:12.285525   47919 cri.go:89] found id: ""
	I0229 19:01:12.285547   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.285556   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:12.285561   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:12.285617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:12.347511   47919 cri.go:89] found id: ""
	I0229 19:01:12.347535   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.347543   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:12.347548   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:12.347593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:12.392145   47919 cri.go:89] found id: ""
	I0229 19:01:12.392207   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.392231   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:12.392248   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:12.392366   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:12.430238   47919 cri.go:89] found id: ""
	I0229 19:01:12.430268   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.430278   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:12.430286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:12.430345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:12.473019   47919 cri.go:89] found id: ""
	I0229 19:01:12.473054   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.473065   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:12.473072   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:12.473131   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:12.510653   47919 cri.go:89] found id: ""
	I0229 19:01:12.510681   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.510692   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:12.510699   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:12.510759   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:12.548137   47919 cri.go:89] found id: ""
	I0229 19:01:12.548163   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.548171   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:12.548176   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:12.548232   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:12.588416   47919 cri.go:89] found id: ""
	I0229 19:01:12.588435   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.588443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:12.588452   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:12.588467   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:12.603651   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:12.603681   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:12.681060   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:12.681081   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:12.681094   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.764839   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:12.764870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:12.807178   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:12.807202   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:12.583319   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.583681   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.184569   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:16.185919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.514955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:17.014358   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.016452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:15.357205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:15.382491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:15.382571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:15.422538   47919 cri.go:89] found id: ""
	I0229 19:01:15.422561   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.422568   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:15.422577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:15.422635   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:15.464564   47919 cri.go:89] found id: ""
	I0229 19:01:15.464593   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.464601   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:15.464607   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:15.464662   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:15.502625   47919 cri.go:89] found id: ""
	I0229 19:01:15.502650   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.502662   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:15.502669   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:15.502724   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:15.543187   47919 cri.go:89] found id: ""
	I0229 19:01:15.543215   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.543229   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:15.543234   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:15.543283   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:15.585273   47919 cri.go:89] found id: ""
	I0229 19:01:15.585296   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.585306   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:15.585314   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:15.585386   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:15.626180   47919 cri.go:89] found id: ""
	I0229 19:01:15.626208   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.626219   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:15.626227   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:15.626288   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:15.670572   47919 cri.go:89] found id: ""
	I0229 19:01:15.670596   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.670604   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:15.670610   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:15.670657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:15.710549   47919 cri.go:89] found id: ""
	I0229 19:01:15.710587   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.710595   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:15.710604   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:15.710618   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.765148   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:15.765180   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:15.780717   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:15.780742   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:15.852811   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:15.852835   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:15.852856   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:15.930728   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:15.930759   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.483798   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:18.497545   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:18.497611   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:18.540226   47919 cri.go:89] found id: ""
	I0229 19:01:18.540256   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.540266   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:18.540274   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:18.540336   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:18.578106   47919 cri.go:89] found id: ""
	I0229 19:01:18.578124   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.578134   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:18.578142   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:18.578192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:18.617138   47919 cri.go:89] found id: ""
	I0229 19:01:18.617167   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.617178   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:18.617185   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:18.617242   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:18.654667   47919 cri.go:89] found id: ""
	I0229 19:01:18.654762   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.654779   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:18.654787   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:18.654845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:18.695837   47919 cri.go:89] found id: ""
	I0229 19:01:18.695859   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.695866   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:18.695875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:18.695929   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:18.738178   47919 cri.go:89] found id: ""
	I0229 19:01:18.738199   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.738206   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:18.738211   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:18.738259   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:18.777018   47919 cri.go:89] found id: ""
	I0229 19:01:18.777044   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.777052   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:18.777058   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:18.777102   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:18.820701   47919 cri.go:89] found id: ""
	I0229 19:01:18.820723   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.820734   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:18.820746   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:18.820762   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:18.907150   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:18.907182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.950363   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:18.950393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:18.999446   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:18.999479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:19.020681   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:19.020714   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:19.139305   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:17.083357   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.087286   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:18.684811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:20.684974   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:22.685289   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.513256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.513492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.640062   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:21.654739   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:21.654799   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:21.701885   47919 cri.go:89] found id: ""
	I0229 19:01:21.701912   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.701921   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:21.701929   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:21.701987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:21.746736   47919 cri.go:89] found id: ""
	I0229 19:01:21.746767   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.746780   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:21.746787   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:21.746847   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:21.784830   47919 cri.go:89] found id: ""
	I0229 19:01:21.784851   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.784859   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:21.784865   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:21.784911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.824122   47919 cri.go:89] found id: ""
	I0229 19:01:21.824151   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.824162   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:21.824171   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:21.824217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:21.869937   47919 cri.go:89] found id: ""
	I0229 19:01:21.869967   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.869979   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:21.869986   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:21.870043   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:21.909902   47919 cri.go:89] found id: ""
	I0229 19:01:21.909928   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.909939   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:21.909946   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:21.910005   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:21.953980   47919 cri.go:89] found id: ""
	I0229 19:01:21.954021   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.954033   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:21.954040   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:21.954108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:21.997483   47919 cri.go:89] found id: ""
	I0229 19:01:21.997510   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.997521   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:21.997531   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:21.997546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:22.108610   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:22.108639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:22.153571   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:22.153596   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:22.204525   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:22.204555   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:22.219217   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:22.219241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:22.294794   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:24.795157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:24.811292   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:24.811363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:24.854354   47919 cri.go:89] found id: ""
	I0229 19:01:24.854387   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.854396   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:24.854402   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:24.854455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:24.890800   47919 cri.go:89] found id: ""
	I0229 19:01:24.890828   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.890838   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:24.890844   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:24.890900   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:24.930961   47919 cri.go:89] found id: ""
	I0229 19:01:24.930983   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.930991   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:24.931001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:24.931073   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.582702   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.584665   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.185732   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:27.683784   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.513886   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.016852   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:24.968719   47919 cri.go:89] found id: ""
	I0229 19:01:24.968740   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.968747   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:24.968752   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:24.968809   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:25.012723   47919 cri.go:89] found id: ""
	I0229 19:01:25.012746   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.012756   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:25.012763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:25.012821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:25.064388   47919 cri.go:89] found id: ""
	I0229 19:01:25.064412   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.064422   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:25.064435   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:25.064496   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:25.122256   47919 cri.go:89] found id: ""
	I0229 19:01:25.122277   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.122286   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:25.122291   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:25.122335   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:25.165487   47919 cri.go:89] found id: ""
	I0229 19:01:25.165515   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.165526   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:25.165536   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:25.165557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:25.249294   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:25.249333   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:25.297013   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:25.297048   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:25.346276   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:25.346309   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:25.362604   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:25.362635   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:25.434586   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:27.935727   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:27.950680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:27.950750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:27.989253   47919 cri.go:89] found id: ""
	I0229 19:01:27.989282   47919 logs.go:276] 0 containers: []
	W0229 19:01:27.989293   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:27.989300   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:27.989357   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:28.039714   47919 cri.go:89] found id: ""
	I0229 19:01:28.039741   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.039750   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:28.039763   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:28.039828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:28.102860   47919 cri.go:89] found id: ""
	I0229 19:01:28.102886   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.102897   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:28.102904   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:28.102971   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:28.160075   47919 cri.go:89] found id: ""
	I0229 19:01:28.160097   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.160104   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:28.160110   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:28.160180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:28.200297   47919 cri.go:89] found id: ""
	I0229 19:01:28.200317   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.200325   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:28.200330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:28.200393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:28.239912   47919 cri.go:89] found id: ""
	I0229 19:01:28.239944   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.239955   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:28.239963   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:28.240018   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:28.278525   47919 cri.go:89] found id: ""
	I0229 19:01:28.278550   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.278558   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:28.278564   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:28.278617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:28.315659   47919 cri.go:89] found id: ""
	I0229 19:01:28.315685   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.315693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:28.315703   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:28.315716   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:28.330102   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:28.330127   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:28.402474   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:28.402497   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:28.402513   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:28.486271   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:28.486308   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:28.531888   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:28.531918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:26.083338   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.083983   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.085481   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:29.684229   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.184054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.513642   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.514405   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:31.082385   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:31.122771   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:31.122844   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:31.165097   47919 cri.go:89] found id: ""
	I0229 19:01:31.165127   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.165138   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:31.165148   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:31.165215   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:31.209449   47919 cri.go:89] found id: ""
	I0229 19:01:31.209482   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.209492   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:31.209498   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:31.209559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:31.249660   47919 cri.go:89] found id: ""
	I0229 19:01:31.249687   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.249698   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:31.249705   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:31.249770   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:31.299268   47919 cri.go:89] found id: ""
	I0229 19:01:31.299292   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.299301   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:31.299308   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:31.299363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:31.339078   47919 cri.go:89] found id: ""
	I0229 19:01:31.339111   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.339123   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:31.339131   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:31.339194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:31.378548   47919 cri.go:89] found id: ""
	I0229 19:01:31.378576   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.378587   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:31.378595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:31.378654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:31.418744   47919 cri.go:89] found id: ""
	I0229 19:01:31.418780   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.418812   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:31.418824   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:31.418889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:31.464078   47919 cri.go:89] found id: ""
	I0229 19:01:31.464103   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.464113   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:31.464124   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:31.464138   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.516406   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:31.516434   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:31.531504   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:31.531527   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:31.607391   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:31.607413   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:31.607426   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:31.691582   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:31.691609   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.233205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:34.250283   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:34.250345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:34.294588   47919 cri.go:89] found id: ""
	I0229 19:01:34.294620   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.294631   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:34.294639   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:34.294712   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:34.337033   47919 cri.go:89] found id: ""
	I0229 19:01:34.337061   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.337071   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:34.337079   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:34.337141   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:34.382800   47919 cri.go:89] found id: ""
	I0229 19:01:34.382831   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.382840   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:34.382845   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:34.382904   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:34.422931   47919 cri.go:89] found id: ""
	I0229 19:01:34.422959   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.422970   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:34.422977   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:34.423059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:34.469724   47919 cri.go:89] found id: ""
	I0229 19:01:34.469755   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.469765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:34.469773   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:34.469824   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:34.513428   47919 cri.go:89] found id: ""
	I0229 19:01:34.513461   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.513472   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:34.513479   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:34.513555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:34.552593   47919 cri.go:89] found id: ""
	I0229 19:01:34.552638   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.552648   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:34.552655   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:34.552717   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:34.596516   47919 cri.go:89] found id: ""
	I0229 19:01:34.596538   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.596546   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:34.596554   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:34.596568   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:34.611782   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:34.611805   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:34.694333   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:34.694352   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:34.694368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:34.781638   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:34.781669   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.832910   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:34.832943   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:32.584363   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.585650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.185025   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:36.683723   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.515185   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.013287   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.014417   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.398458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:37.415617   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:37.415696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:37.455390   47919 cri.go:89] found id: ""
	I0229 19:01:37.455421   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.455433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:37.455440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:37.455501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:37.498869   47919 cri.go:89] found id: ""
	I0229 19:01:37.498890   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.498901   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:37.498909   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:37.498972   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:37.538928   47919 cri.go:89] found id: ""
	I0229 19:01:37.538952   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.538960   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:37.538966   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:37.539012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:37.577278   47919 cri.go:89] found id: ""
	I0229 19:01:37.577299   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.577310   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:37.577317   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:37.577372   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:37.620313   47919 cri.go:89] found id: ""
	I0229 19:01:37.620342   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.620352   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:37.620359   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:37.620420   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:37.657696   47919 cri.go:89] found id: ""
	I0229 19:01:37.657717   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.657726   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:37.657734   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:37.657792   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:37.698814   47919 cri.go:89] found id: ""
	I0229 19:01:37.698833   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.698841   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:37.698848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:37.698902   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:37.736438   47919 cri.go:89] found id: ""
	I0229 19:01:37.736469   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.736480   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:37.736490   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:37.736506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:37.753849   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:37.753871   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:37.854740   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:37.854764   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:37.854783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:37.943837   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:37.943872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:37.988180   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:37.988209   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:37.084353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.582760   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.183743   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.184218   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.014652   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.014745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:40.543133   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:40.558453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:40.558526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:40.599794   47919 cri.go:89] found id: ""
	I0229 19:01:40.599814   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.599821   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:40.599827   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:40.599874   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:40.641738   47919 cri.go:89] found id: ""
	I0229 19:01:40.641762   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.641769   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:40.641775   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:40.641819   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:40.683905   47919 cri.go:89] found id: ""
	I0229 19:01:40.683935   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.683945   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:40.683953   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:40.684006   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:40.727645   47919 cri.go:89] found id: ""
	I0229 19:01:40.727675   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.727685   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:40.727693   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:40.727754   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:40.785142   47919 cri.go:89] found id: ""
	I0229 19:01:40.785172   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.785192   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:40.785199   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:40.785252   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:40.854534   47919 cri.go:89] found id: ""
	I0229 19:01:40.854560   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.854571   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:40.854580   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:40.854639   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:40.900823   47919 cri.go:89] found id: ""
	I0229 19:01:40.900851   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.900862   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:40.900869   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:40.900928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:40.938108   47919 cri.go:89] found id: ""
	I0229 19:01:40.938135   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.938146   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:40.938156   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:40.938171   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:40.987452   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:40.987482   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:41.037388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:41.037417   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:41.051987   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:41.052015   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:41.126077   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:41.126102   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:41.126116   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:43.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:43.730683   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:43.730755   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:43.790637   47919 cri.go:89] found id: ""
	I0229 19:01:43.790665   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.790676   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:43.790682   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:43.790731   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:43.848237   47919 cri.go:89] found id: ""
	I0229 19:01:43.848263   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.848272   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:43.848277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:43.848337   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:43.897892   47919 cri.go:89] found id: ""
	I0229 19:01:43.897920   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.897928   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:43.897934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:43.897989   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:43.936068   47919 cri.go:89] found id: ""
	I0229 19:01:43.936089   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.936097   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:43.936102   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:43.936149   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:43.978636   47919 cri.go:89] found id: ""
	I0229 19:01:43.978670   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.978682   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:43.978689   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:43.978751   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:44.018642   47919 cri.go:89] found id: ""
	I0229 19:01:44.018676   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.018684   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:44.018690   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:44.018737   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:44.056237   47919 cri.go:89] found id: ""
	I0229 19:01:44.056267   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.056278   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:44.056285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:44.056347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:44.095489   47919 cri.go:89] found id: ""
	I0229 19:01:44.095522   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.095532   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:44.095543   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:44.095557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:44.139407   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:44.139433   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:44.189893   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:44.189921   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:44.206426   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:44.206449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:44.285594   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:44.285621   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:44.285638   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:41.584614   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:44.083599   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.185509   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.683851   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.684064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.015082   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.017540   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:46.869271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:46.885267   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:46.885356   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:46.921696   47919 cri.go:89] found id: ""
	I0229 19:01:46.921718   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.921725   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:46.921731   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:46.921789   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:46.960265   47919 cri.go:89] found id: ""
	I0229 19:01:46.960291   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.960302   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:46.960309   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:46.960367   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:46.998035   47919 cri.go:89] found id: ""
	I0229 19:01:46.998062   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.998070   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:46.998075   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:46.998119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:47.041563   47919 cri.go:89] found id: ""
	I0229 19:01:47.041586   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.041595   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:47.041600   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:47.041643   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:47.084146   47919 cri.go:89] found id: ""
	I0229 19:01:47.084167   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.084174   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:47.084179   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:47.084227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:47.126813   47919 cri.go:89] found id: ""
	I0229 19:01:47.126835   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.126845   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:47.126853   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:47.126909   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:47.165379   47919 cri.go:89] found id: ""
	I0229 19:01:47.165399   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.165406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:47.165412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:47.165454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:47.204263   47919 cri.go:89] found id: ""
	I0229 19:01:47.204306   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.204316   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:47.204328   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:47.204345   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:47.248848   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:47.248876   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:47.299388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:47.299416   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:47.314484   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:47.314507   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:47.386231   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:47.386256   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:47.386272   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:46.084527   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:48.085557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:50.189188   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:52.684126   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.513497   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:51.514191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.515909   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.965988   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:49.980621   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:49.980700   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:50.025010   47919 cri.go:89] found id: ""
	I0229 19:01:50.025030   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.025037   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:50.025042   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:50.025090   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:50.066947   47919 cri.go:89] found id: ""
	I0229 19:01:50.066976   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.066984   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:50.066990   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:50.067061   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:50.108892   47919 cri.go:89] found id: ""
	I0229 19:01:50.108913   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.108931   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:50.108937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:50.108997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:50.149601   47919 cri.go:89] found id: ""
	I0229 19:01:50.149626   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.149636   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:50.149643   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:50.149704   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:50.191881   47919 cri.go:89] found id: ""
	I0229 19:01:50.191908   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.191918   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:50.191925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:50.191987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:50.233782   47919 cri.go:89] found id: ""
	I0229 19:01:50.233803   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.233811   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:50.233816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:50.233870   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:50.274913   47919 cri.go:89] found id: ""
	I0229 19:01:50.274941   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.274950   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:50.274955   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:50.275050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:50.321924   47919 cri.go:89] found id: ""
	I0229 19:01:50.321945   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.321953   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:50.321967   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:50.321978   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:50.367357   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:50.367388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:50.417229   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:50.417260   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:50.432031   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:50.432056   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:50.504920   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.504942   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:50.504960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.110884   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:53.126947   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:53.127004   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:53.166940   47919 cri.go:89] found id: ""
	I0229 19:01:53.166965   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.166975   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:53.166982   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:53.167054   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:53.205917   47919 cri.go:89] found id: ""
	I0229 19:01:53.205960   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.205968   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:53.205974   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:53.206030   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:53.245547   47919 cri.go:89] found id: ""
	I0229 19:01:53.245577   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.245587   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:53.245595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:53.245654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:53.287513   47919 cri.go:89] found id: ""
	I0229 19:01:53.287540   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.287550   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:53.287557   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:53.287617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:53.329269   47919 cri.go:89] found id: ""
	I0229 19:01:53.329299   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.329310   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:53.329318   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:53.329379   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:53.377438   47919 cri.go:89] found id: ""
	I0229 19:01:53.377467   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.377478   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:53.377485   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:53.377549   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:53.418414   47919 cri.go:89] found id: ""
	I0229 19:01:53.418440   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.418448   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:53.418453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:53.418514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:53.458365   47919 cri.go:89] found id: ""
	I0229 19:01:53.458393   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.458402   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:53.458409   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:53.458421   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.540710   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:53.540744   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:53.637271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:53.637302   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:53.687822   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:53.687850   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:53.703482   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:53.703506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:53.779564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.584198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.082170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:55.082683   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:54.685554   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.685951   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.013441   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:58.016917   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.280300   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:56.295210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:56.295295   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:56.336903   47919 cri.go:89] found id: ""
	I0229 19:01:56.336935   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.336945   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:56.336953   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:56.337002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:56.373300   47919 cri.go:89] found id: ""
	I0229 19:01:56.373322   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.373330   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:56.373338   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:56.373390   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:56.411949   47919 cri.go:89] found id: ""
	I0229 19:01:56.411975   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.411984   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:56.411990   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:56.412047   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:56.453302   47919 cri.go:89] found id: ""
	I0229 19:01:56.453329   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.453339   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:56.453344   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:56.453403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:56.490543   47919 cri.go:89] found id: ""
	I0229 19:01:56.490565   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.490576   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:56.490582   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:56.490637   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:56.547078   47919 cri.go:89] found id: ""
	I0229 19:01:56.547101   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.547108   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:56.547113   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:56.547171   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:56.598382   47919 cri.go:89] found id: ""
	I0229 19:01:56.598408   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.598417   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:56.598424   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:56.598478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:56.646090   47919 cri.go:89] found id: ""
	I0229 19:01:56.646117   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.646125   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:56.646134   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:56.646145   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:56.691685   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:56.691711   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:56.742886   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:56.742927   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:56.758326   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:56.758350   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:56.830140   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.830160   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:56.830177   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.414437   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:59.429710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:59.429793   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:59.473993   47919 cri.go:89] found id: ""
	I0229 19:01:59.474018   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.474025   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:59.474031   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:59.474091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:59.529114   47919 cri.go:89] found id: ""
	I0229 19:01:59.529143   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.529157   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:59.529164   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:59.529222   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:59.596624   47919 cri.go:89] found id: ""
	I0229 19:01:59.596654   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.596665   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:59.596672   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:59.596730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:59.641088   47919 cri.go:89] found id: ""
	I0229 19:01:59.641118   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.641130   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:59.641138   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:59.641198   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:59.682294   47919 cri.go:89] found id: ""
	I0229 19:01:59.682318   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.682327   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:59.682333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:59.682406   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:59.722881   47919 cri.go:89] found id: ""
	I0229 19:01:59.722902   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.722910   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:59.722915   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:59.722982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:59.761727   47919 cri.go:89] found id: ""
	I0229 19:01:59.761757   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.761767   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:59.761778   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:59.761839   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:59.805733   47919 cri.go:89] found id: ""
	I0229 19:01:59.805762   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.805772   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:59.805783   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:59.805798   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:59.883702   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:59.883721   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:59.883733   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:57.083166   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.085841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.183892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:01.184393   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:00.513790   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.013807   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.960649   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:59.960682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:00.012085   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:00.012121   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:00.065794   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:00.065834   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.583319   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:02.603123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:02.603178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:02.654992   47919 cri.go:89] found id: ""
	I0229 19:02:02.655017   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.655046   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:02.655053   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:02.655103   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:02.697067   47919 cri.go:89] found id: ""
	I0229 19:02:02.697098   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.697109   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:02.697116   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:02.697178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:02.734804   47919 cri.go:89] found id: ""
	I0229 19:02:02.734828   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.734835   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:02.734841   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:02.734893   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:02.778292   47919 cri.go:89] found id: ""
	I0229 19:02:02.778313   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.778321   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:02.778328   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:02.778382   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:02.819431   47919 cri.go:89] found id: ""
	I0229 19:02:02.819458   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.819470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:02.819478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:02.819537   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:02.862409   47919 cri.go:89] found id: ""
	I0229 19:02:02.862432   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.862439   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:02.862445   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:02.862487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:02.902486   47919 cri.go:89] found id: ""
	I0229 19:02:02.902513   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.902521   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:02.902526   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:02.902571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:02.939408   47919 cri.go:89] found id: ""
	I0229 19:02:02.939436   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.939443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:02.939451   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:02.939462   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.954539   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:02.954564   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:03.032534   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:03.032556   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:03.032574   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:03.116064   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:03.116096   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:03.167242   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:03.167265   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:01.582557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.583876   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:04.576948   47608 pod_ready.go:81] duration metric: took 4m0.00105469s waiting for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:04.576996   47608 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:04.577015   47608 pod_ready.go:38] duration metric: took 4m12.91384632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:04.577039   47608 kubeadm.go:640] restartCluster took 4m30.900514081s
	W0229 19:02:04.577101   47608 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:04.577137   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:03.684074   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.686050   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.686409   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.014368   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.518556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.718312   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:05.732879   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:05.733012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:05.774525   47919 cri.go:89] found id: ""
	I0229 19:02:05.774557   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.774569   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:05.774577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:05.774640   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:05.817870   47919 cri.go:89] found id: ""
	I0229 19:02:05.817900   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.817912   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:05.817919   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:05.817998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:05.859533   47919 cri.go:89] found id: ""
	I0229 19:02:05.859565   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.859579   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:05.859587   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:05.859646   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:05.904971   47919 cri.go:89] found id: ""
	I0229 19:02:05.905003   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.905014   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:05.905021   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:05.905086   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:05.950431   47919 cri.go:89] found id: ""
	I0229 19:02:05.950459   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.950470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:05.950478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:05.950546   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:05.999464   47919 cri.go:89] found id: ""
	I0229 19:02:05.999489   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.999500   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:05.999508   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:05.999588   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:06.045086   47919 cri.go:89] found id: ""
	I0229 19:02:06.045117   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.045133   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:06.045140   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:06.045203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:06.091542   47919 cri.go:89] found id: ""
	I0229 19:02:06.091571   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.091583   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:06.091592   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:06.091607   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:06.156524   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:06.156558   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:06.174941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:06.174965   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:06.260443   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:06.260467   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:06.260483   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:06.377415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:06.377457   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:08.931407   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:08.946035   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:08.946108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:08.989299   47919 cri.go:89] found id: ""
	I0229 19:02:08.989326   47919 logs.go:276] 0 containers: []
	W0229 19:02:08.989338   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:08.989345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:08.989405   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:09.033634   47919 cri.go:89] found id: ""
	I0229 19:02:09.033664   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.033677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:09.033684   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:09.033745   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:09.084381   47919 cri.go:89] found id: ""
	I0229 19:02:09.084406   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.084435   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:09.084442   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:09.084507   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:09.132526   47919 cri.go:89] found id: ""
	I0229 19:02:09.132555   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.132573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:09.132581   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:09.132644   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:09.182655   47919 cri.go:89] found id: ""
	I0229 19:02:09.182684   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.182694   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:09.182701   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:09.182764   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:09.223164   47919 cri.go:89] found id: ""
	I0229 19:02:09.223191   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.223202   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:09.223210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:09.223267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:09.271882   47919 cri.go:89] found id: ""
	I0229 19:02:09.271908   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.271926   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:09.271934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:09.271992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:09.331796   47919 cri.go:89] found id: ""
	I0229 19:02:09.331826   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.331837   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:09.331847   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:09.331860   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:09.398969   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:09.399009   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:09.418992   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:09.419040   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:09.503358   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:09.503381   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:09.503394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:09.612549   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:09.612586   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:10.184741   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.685204   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:10.024230   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.513343   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.162138   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:12.175827   47919 kubeadm.go:640] restartCluster took 4m14.562960798s
	W0229 19:02:12.175902   47919 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 19:02:12.175940   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:12.639231   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:12.658353   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:12.671552   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:12.684278   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:12.684323   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:02:12.903644   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:15.184189   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.184275   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:14.517015   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.015195   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.184474   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:21.184737   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.513735   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:22.016650   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:23.185852   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:25.685744   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:24.516493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:26.519091   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:29.013740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:28.184960   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:30.685098   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:31.013808   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:33.514912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.055439   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.47828283s)
	I0229 19:02:37.055501   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:37.077296   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:37.089984   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:37.100332   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:37.100379   47608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:02:37.156153   47608 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:02:37.156243   47608 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:02:37.317040   47608 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:02:37.317142   47608 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:02:37.317220   47608 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:02:37.551800   47608 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:02:33.184422   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:35.686104   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.553918   47608 out.go:204]   - Generating certificates and keys ...
	I0229 19:02:37.554019   47608 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:02:37.554099   47608 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:02:37.554197   47608 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:02:37.554271   47608 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:02:37.554545   47608 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:02:37.555258   47608 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:02:37.555792   47608 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:02:37.556150   47608 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:02:37.556697   47608 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:02:37.557215   47608 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:02:37.557744   47608 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:02:37.557835   47608 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:02:37.725663   47608 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:02:37.801114   47608 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:02:37.971825   47608 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:02:38.081281   47608 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:02:38.081986   47608 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:02:38.086435   47608 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:02:36.013356   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.014838   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.088264   47608 out.go:204]   - Booting up control plane ...
	I0229 19:02:38.088353   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:02:38.088442   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:02:38.088533   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:02:38.106686   47608 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:02:38.107606   47608 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:02:38.107671   47608 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:02:38.264387   47608 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:02:38.185682   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.684963   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.014933   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:42.016282   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.768315   47608 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503831 seconds
	I0229 19:02:44.768482   47608 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:02:44.786115   47608 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:02:45.321509   47608 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:02:45.321785   47608 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-991128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:02:45.834905   47608 kubeadm.go:322] [bootstrap-token] Using token: 53x4pg.x71etkalcz6sdqmq
	I0229 19:02:45.836192   47608 out.go:204]   - Configuring RBAC rules ...
	I0229 19:02:45.836319   47608 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:02:45.843486   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:02:45.854690   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:02:45.866571   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:02:45.870812   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:02:45.874413   47608 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:02:45.891377   47608 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:02:46.190541   47608 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:02:46.251452   47608 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:02:46.254418   47608 kubeadm.go:322] 
	I0229 19:02:46.254529   47608 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:02:46.254552   47608 kubeadm.go:322] 
	I0229 19:02:46.254653   47608 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:02:46.254663   47608 kubeadm.go:322] 
	I0229 19:02:46.254693   47608 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:02:46.254777   47608 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:02:46.254843   47608 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:02:46.254856   47608 kubeadm.go:322] 
	I0229 19:02:46.254932   47608 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:02:46.254949   47608 kubeadm.go:322] 
	I0229 19:02:46.255010   47608 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:02:46.255035   47608 kubeadm.go:322] 
	I0229 19:02:46.255115   47608 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:02:46.255219   47608 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:02:46.255288   47608 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:02:46.255298   47608 kubeadm.go:322] 
	I0229 19:02:46.255366   47608 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:02:46.255456   47608 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:02:46.255469   47608 kubeadm.go:322] 
	I0229 19:02:46.255574   47608 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.255704   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:02:46.255726   47608 kubeadm.go:322] 	--control-plane 
	I0229 19:02:46.255730   47608 kubeadm.go:322] 
	I0229 19:02:46.255838   47608 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:02:46.255850   47608 kubeadm.go:322] 
	I0229 19:02:46.255951   47608 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.256097   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:02:46.261669   47608 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:46.264240   47608 cni.go:84] Creating CNI manager for ""
	I0229 19:02:46.264255   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:02:46.266874   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:02:43.185008   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:45.685480   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.515334   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:47.014269   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:48.006787   48088 pod_ready.go:81] duration metric: took 4m0.000159724s waiting for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:48.006810   48088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:48.006828   48088 pod_ready.go:38] duration metric: took 4m13.055720974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:48.006852   48088 kubeadm.go:640] restartCluster took 4m30.764284147s
	W0229 19:02:48.006932   48088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:48.006958   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:46.268155   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:02:46.302630   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:02:46.363238   47608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:02:46.363314   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.363332   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=embed-certs-991128 minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.429324   47608 ops.go:34] apiserver oom_adj: -16
	I0229 19:02:46.736245   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.236707   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.736427   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.236379   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.736599   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.236640   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.736492   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:50.237145   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.184252   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.185542   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:52.683769   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.736510   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.236643   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.736840   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.236378   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.736992   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.236672   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.736958   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.236590   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.736323   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.237218   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.184845   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:57.685255   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:55.736774   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.236342   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.736380   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.236930   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.737100   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.237031   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.387963   47608 kubeadm.go:1088] duration metric: took 12.024710189s to wait for elevateKubeSystemPrivileges.
	I0229 19:02:58.388004   47608 kubeadm.go:406] StartCluster complete in 5m24.764885945s
	I0229 19:02:58.388027   47608 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.388120   47608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:02:58.390675   47608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.390953   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:02:58.391045   47608 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:02:58.391123   47608 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-991128"
	I0229 19:02:58.391146   47608 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-991128"
	W0229 19:02:58.391154   47608 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:02:58.391154   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:02:58.391203   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.391204   47608 addons.go:69] Setting default-storageclass=true in profile "embed-certs-991128"
	I0229 19:02:58.391244   47608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-991128"
	I0229 19:02:58.391596   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391624   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391698   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391718   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391204   47608 addons.go:69] Setting metrics-server=true in profile "embed-certs-991128"
	I0229 19:02:58.391948   47608 addons.go:234] Setting addon metrics-server=true in "embed-certs-991128"
	W0229 19:02:58.391957   47608 addons.go:243] addon metrics-server should already be in state true
	I0229 19:02:58.391993   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.392356   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.392387   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.409953   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0229 19:02:58.409972   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0229 19:02:58.410460   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.410478   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411005   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411018   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411018   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411048   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411360   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0229 19:02:58.411529   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411534   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411740   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411752   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.412075   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.412114   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.412144   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.412164   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.412662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.413148   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.413178   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.415173   47608 addons.go:234] Setting addon default-storageclass=true in "embed-certs-991128"
	W0229 19:02:58.415195   47608 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:02:58.415222   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.415608   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.415638   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.429891   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0229 19:02:58.430108   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0229 19:02:58.430343   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.430782   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.431278   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431299   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431355   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431369   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.431720   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.432048   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.432471   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.432497   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.432570   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0229 19:02:58.432926   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.433593   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.433611   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.433700   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.436201   47608 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:02:58.434375   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.437531   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:02:58.437549   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:02:58.437568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.436414   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.440191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.441799   47608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:02:58.440820   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.441382   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.443189   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.443204   47608 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.443216   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:02:58.443228   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.443226   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.443288   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.443399   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.443538   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.446253   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446573   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.446840   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446885   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.447103   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.447250   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.447399   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.449854   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0229 19:02:58.450308   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.450842   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.450862   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.451215   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.452123   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.453574   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.453805   47608 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.453822   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:02:58.453836   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.456718   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457141   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.457198   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.457891   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.458055   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.458208   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.622646   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.666581   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:02:58.680294   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:02:58.680319   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:02:58.701182   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.826426   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:02:58.826454   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:02:58.896074   47608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-991128" context rescaled to 1 replicas
	I0229 19:02:58.896112   47608 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:02:58.897987   47608 out.go:177] * Verifying Kubernetes components...
	I0229 19:02:58.899307   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:58.943695   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:02:58.943719   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:02:59.111473   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:00.514730   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.892048484s)
	I0229 19:03:00.514786   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.514797   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515119   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515140   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.515155   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.515151   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.515163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515407   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515422   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.525724   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.525747   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.526016   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.526034   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.526058   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.549463   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882844212s)
	I0229 19:03:00.549496   47608 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:01.032296   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331073482s)
	I0229 19:03:01.032299   47608 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.132962021s)
	I0229 19:03:01.032378   47608 node_ready.go:35] waiting up to 6m0s for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.032351   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032449   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.032776   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.032863   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.032884   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.032912   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032929   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.033250   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.033294   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.033313   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.054533   47608 node_ready.go:49] node "embed-certs-991128" has status "Ready":"True"
	I0229 19:03:01.054561   47608 node_ready.go:38] duration metric: took 22.162376ms waiting for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.054574   47608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:01.073737   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962221621s)
	I0229 19:03:01.073792   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.073807   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074112   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074134   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074144   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.074152   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074378   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.074414   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074423   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074438   47608 addons.go:470] Verifying addon metrics-server=true in "embed-certs-991128"
	I0229 19:03:01.076668   47608 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0229 19:03:00.186003   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:02.684214   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:01.077896   47608 addons.go:505] enable addons completed in 2.686848059s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0229 19:03:01.090039   47608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101161   47608 pod_ready.go:92] pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.101188   47608 pod_ready.go:81] duration metric: took 11.117889ms waiting for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101200   47608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106035   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.106059   47608 pod_ready.go:81] duration metric: took 4.853039ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106069   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112716   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.112741   47608 pod_ready.go:81] duration metric: took 6.663364ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112753   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117682   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.117712   47608 pod_ready.go:81] duration metric: took 4.950508ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117723   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449759   47608 pod_ready.go:92] pod "kube-proxy-5grst" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.449780   47608 pod_ready.go:81] duration metric: took 332.0508ms waiting for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449789   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837609   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.837633   47608 pod_ready.go:81] duration metric: took 387.837788ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837641   47608 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:03.844755   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.183456   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.184892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.844890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:09.185625   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:11.683928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:10.345767   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:12.346373   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:14.844773   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:13.684321   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.184064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:19.346873   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:18.185564   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.685386   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.199795   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.19281949s)
	I0229 19:03:20.199858   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:20.217490   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:20.230760   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:20.243524   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:20.243561   48088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:20.456117   48088 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:03:21.845081   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.845701   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.184306   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.185094   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.677354   47515 pod_ready.go:81] duration metric: took 4m0.000327645s waiting for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	E0229 19:03:25.677385   47515 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:03:25.677415   47515 pod_ready.go:38] duration metric: took 4m14.05174509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:25.677440   47515 kubeadm.go:640] restartCluster took 4m31.88709285s
	W0229 19:03:25.677495   47515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:03:25.677520   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:03:29.090699   48088 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:03:29.090795   48088 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:29.090912   48088 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:29.091058   48088 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:29.091185   48088 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:29.091273   48088 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:29.092712   48088 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:29.092825   48088 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:29.092914   48088 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:29.093021   48088 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:29.093110   48088 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:29.093199   48088 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:29.093273   48088 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:29.093353   48088 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:29.093430   48088 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:29.093523   48088 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:29.093617   48088 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:29.093668   48088 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:29.093741   48088 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:29.093811   48088 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:29.093880   48088 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:29.093962   48088 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:29.094031   48088 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:29.094133   48088 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:29.094211   48088 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:29.095825   48088 out.go:204]   - Booting up control plane ...
	I0229 19:03:29.095939   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:29.096048   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:29.096154   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:29.096322   48088 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:29.096423   48088 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:29.096489   48088 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:29.096694   48088 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:03:29.096769   48088 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003591 seconds
	I0229 19:03:29.096853   48088 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:03:29.096951   48088 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:03:29.097006   48088 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:03:29.097158   48088 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-153528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:03:29.097202   48088 kubeadm.go:322] [bootstrap-token] Using token: 1l0lv4.q8mu3aeamo8e3253
	I0229 19:03:29.098693   48088 out.go:204]   - Configuring RBAC rules ...
	I0229 19:03:29.098829   48088 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:03:29.098945   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:03:29.099166   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:03:29.099357   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:03:29.099502   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:03:29.099613   48088 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:03:29.099756   48088 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:03:29.099816   48088 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:03:29.099874   48088 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:03:29.099884   48088 kubeadm.go:322] 
	I0229 19:03:29.099961   48088 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:03:29.099970   48088 kubeadm.go:322] 
	I0229 19:03:29.100060   48088 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:03:29.100070   48088 kubeadm.go:322] 
	I0229 19:03:29.100100   48088 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:03:29.100173   48088 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:03:29.100239   48088 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:03:29.100252   48088 kubeadm.go:322] 
	I0229 19:03:29.100319   48088 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:03:29.100329   48088 kubeadm.go:322] 
	I0229 19:03:29.100388   48088 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:03:29.100398   48088 kubeadm.go:322] 
	I0229 19:03:29.100463   48088 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:03:29.100559   48088 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:03:29.100651   48088 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:03:29.100661   48088 kubeadm.go:322] 
	I0229 19:03:29.100763   48088 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:03:29.100862   48088 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:03:29.100877   48088 kubeadm.go:322] 
	I0229 19:03:29.100984   48088 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101114   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:03:29.101143   48088 kubeadm.go:322] 	--control-plane 
	I0229 19:03:29.101152   48088 kubeadm.go:322] 
	I0229 19:03:29.101249   48088 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:03:29.101258   48088 kubeadm.go:322] 
	I0229 19:03:29.101351   48088 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101473   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:03:29.101488   48088 cni.go:84] Creating CNI manager for ""
	I0229 19:03:29.101497   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:03:29.103073   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:03:29.104219   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:03:29.170952   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:03:29.239084   48088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:03:29.239154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:29.239173   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=default-k8s-diff-port-153528 minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:25.847505   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:28.346494   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:29.423784   48088 ops.go:34] apiserver oom_adj: -16
	I0229 19:03:29.641150   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.141394   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.641982   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.141220   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.642229   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.141232   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.641372   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.141757   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.641285   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:34.141462   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.346615   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:32.844207   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.846669   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.641857   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.142068   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.641289   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.142146   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.641965   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.141335   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.641778   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.141415   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.641267   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:39.141162   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.846708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.347339   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.642154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.141271   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.641433   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.141522   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.641353   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.787617   48088 kubeadm.go:1088] duration metric: took 12.548525295s to wait for elevateKubeSystemPrivileges.
	I0229 19:03:41.787657   48088 kubeadm.go:406] StartCluster complete in 5m24.60631624s
	I0229 19:03:41.787678   48088 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.787771   48088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:03:41.789341   48088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.789617   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:03:41.789716   48088 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:03:41.789815   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:03:41.789835   48088 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789835   48088 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789856   48088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153528"
	I0229 19:03:41.789821   48088 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789879   48088 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789890   48088 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:03:41.789937   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.789861   48088 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789963   48088 addons.go:243] addon metrics-server should already be in state true
	I0229 19:03:41.790008   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.790304   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790312   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790332   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790338   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790374   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790417   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.806924   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0229 19:03:41.807115   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0229 19:03:41.807481   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.807671   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808017   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808036   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808178   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808194   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808251   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0229 19:03:41.808377   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.808613   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808953   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.808999   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.809113   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.809136   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.809418   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809604   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809789   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.810683   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.810718   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.813030   48088 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.813045   48088 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:03:41.813066   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.813309   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.813321   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.824373   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0229 19:03:41.824768   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.825263   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.825280   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.825557   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.825699   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.827334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.828844   48088 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:03:41.829931   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:03:41.829943   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:03:41.829968   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.833079   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833090   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0229 19:03:41.833451   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.833516   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.833527   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.833895   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.833913   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0229 19:03:41.833917   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.833982   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.834140   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.834272   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.834795   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.835272   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.835293   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835298   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.835675   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835791   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.835798   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.835827   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.837394   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.839349   48088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:41.840971   48088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:41.840992   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:03:41.841008   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.844091   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844475   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.844505   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844735   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.844954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.845143   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.845300   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.853524   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0229 19:03:41.855329   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.855788   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.855809   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.856135   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.856317   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.857882   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.858179   48088 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:41.858193   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:03:41.858214   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.861292   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861640   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.861664   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.862088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.862241   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.862413   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:42.162741   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:03:42.162760   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:03:42.164559   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:42.185784   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:03:42.225413   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:42.283759   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:03:42.283792   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:03:42.296879   48088 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-153528" context rescaled to 1 replicas
	I0229 19:03:42.296912   48088 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:03:42.298687   48088 out.go:177] * Verifying Kubernetes components...
	I0229 19:03:42.300011   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:42.478347   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:42.478370   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:03:42.626185   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:44.654846   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469026575s)
	I0229 19:03:44.654876   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.429431888s)
	I0229 19:03:44.654891   48088 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:44.654927   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.654937   48088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.354896537s)
	I0229 19:03:44.654987   48088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.654942   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655090   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.490505268s)
	I0229 19:03:44.655115   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655125   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655326   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655344   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655346   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655354   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655357   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655370   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655379   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655562   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655604   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655579   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655662   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655821   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655659   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659331   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033110492s)
	I0229 19:03:44.659381   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659393   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659652   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659667   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659675   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659683   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659685   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659902   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659939   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659950   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659960   48088 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-153528"
	I0229 19:03:44.683397   48088 node_ready.go:49] node "default-k8s-diff-port-153528" has status "Ready":"True"
	I0229 19:03:44.683417   48088 node_ready.go:38] duration metric: took 28.415374ms waiting for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.683427   48088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:44.685811   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.685831   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.686088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.686110   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.686122   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.687970   48088 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0229 19:03:41.849469   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.345593   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.689232   48088 addons.go:505] enable addons completed in 2.899518009s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0229 19:03:44.693381   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720914   48088 pod_ready.go:92] pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.720942   48088 pod_ready.go:81] duration metric: took 27.53714ms waiting for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720954   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729596   48088 pod_ready.go:92] pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.729618   48088 pod_ready.go:81] duration metric: took 8.655818ms waiting for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729628   48088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734112   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.734130   48088 pod_ready.go:81] duration metric: took 4.494255ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734137   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738843   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.738860   48088 pod_ready.go:81] duration metric: took 4.717537ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738868   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059153   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.059174   48088 pod_ready.go:81] duration metric: took 320.300485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059183   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465590   48088 pod_ready.go:92] pod "kube-proxy-bvrxx" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.465616   48088 pod_ready.go:81] duration metric: took 406.426237ms waiting for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465630   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858390   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.858413   48088 pod_ready.go:81] duration metric: took 392.775547ms waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858426   48088 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:47.866057   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:46.848336   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.344899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.866128   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.871764   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.346608   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:53.846506   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.394324   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.716776929s)
	I0229 19:03:58.394415   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:58.411946   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:58.422778   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:58.432981   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:58.433029   47515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:58.497643   47515 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:03:58.497784   47515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:58.673058   47515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:58.673181   47515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:58.673291   47515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:58.915681   47515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:54.366316   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:56.866740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.867746   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.917365   47515 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:58.917468   47515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:58.917556   47515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:58.917657   47515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:58.917758   47515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:58.917857   47515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:58.917933   47515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:58.918117   47515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:58.918699   47515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:58.919679   47515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:58.920578   47515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:58.921424   47515 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:58.921738   47515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:59.066887   47515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:59.215266   47515 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:03:59.404270   47515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:59.514467   47515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:59.615483   47515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:59.616256   47515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:59.619177   47515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:55.850264   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.346720   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:59.620798   47515 out.go:204]   - Booting up control plane ...
	I0229 19:03:59.620910   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:59.621009   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:59.621758   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:59.648331   47515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:59.649070   47515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:59.649141   47515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:59.796018   47515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:00.868393   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.366167   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:00.848016   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.347491   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:05.801078   47515 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003292 seconds
	I0229 19:04:05.820231   47515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:04:05.842846   47515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:04:06.388308   47515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:04:06.388598   47515 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-247197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:04:06.905903   47515 kubeadm.go:322] [bootstrap-token] Using token: 42vs85.s8nvx0pxc27k9bgo
	I0229 19:04:06.907650   47515 out.go:204]   - Configuring RBAC rules ...
	I0229 19:04:06.907813   47515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:04:06.913716   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:04:06.925730   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:04:06.929319   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:04:06.933110   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:04:06.938550   47515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:04:06.956559   47515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:04:07.216913   47515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:04:07.320534   47515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:04:07.321455   47515 kubeadm.go:322] 
	I0229 19:04:07.321548   47515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:04:07.321578   47515 kubeadm.go:322] 
	I0229 19:04:07.321696   47515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:04:07.321710   47515 kubeadm.go:322] 
	I0229 19:04:07.321752   47515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:04:07.321848   47515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:04:07.321914   47515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:04:07.321929   47515 kubeadm.go:322] 
	I0229 19:04:07.322021   47515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:04:07.322032   47515 kubeadm.go:322] 
	I0229 19:04:07.322099   47515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:04:07.322111   47515 kubeadm.go:322] 
	I0229 19:04:07.322182   47515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:04:07.322304   47515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:04:07.322404   47515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:04:07.322416   47515 kubeadm.go:322] 
	I0229 19:04:07.322559   47515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:04:07.322679   47515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:04:07.322704   47515 kubeadm.go:322] 
	I0229 19:04:07.322808   47515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.322922   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:04:07.322956   47515 kubeadm.go:322] 	--control-plane 
	I0229 19:04:07.322964   47515 kubeadm.go:322] 
	I0229 19:04:07.323090   47515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:04:07.323103   47515 kubeadm.go:322] 
	I0229 19:04:07.323230   47515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.323408   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:04:07.323921   47515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:04:07.323961   47515 cni.go:84] Creating CNI manager for ""
	I0229 19:04:07.323975   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:04:07.325925   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:04:07.327319   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:04:07.387016   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
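The two lines above are the bridge CNI step: minikube creates /etc/cni/net.d and copies a 457-byte conflist into it so crio can wire pod networking through a host bridge. The exact payload is not shown in this log; the sketch below is only a generic bridge-plus-portmap conflist of the kind such a file typically contains, with illustrative values (name, bridge device, and subnet are assumptions, not minikube's actual 1-k8s.conflist):

  {
    "cniVersion": "0.4.0",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }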
	I0229 19:04:07.434438   47515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:04:07.434538   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.434554   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=no-preload-247197 minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.752182   47515 ops.go:34] apiserver oom_adj: -16
	I0229 19:04:07.752320   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.955017   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:04:08.955134   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:04:08.956493   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:08.956586   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:08.956684   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:08.956809   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:08.956955   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:08.957116   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:08.957253   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:08.957304   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:08.957375   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:08.959231   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:08.959317   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:08.959429   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:08.959550   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:08.959637   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:08.959745   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:08.959792   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:08.959851   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:08.959934   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:08.960022   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:08.960099   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:08.960159   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:08.960227   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:08.960303   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:08.960349   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:08.960403   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:08.960462   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:08.960540   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:05.369713   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.871542   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:08.962078   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:08.962181   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:08.962279   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:08.962361   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:08.962470   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:08.962646   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:08.962689   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:08.962777   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.962968   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963056   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963331   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963436   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963646   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963761   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963949   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964053   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.964273   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964281   47919 kubeadm.go:322] 
	I0229 19:04:08.964313   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:04:08.964351   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:04:08.964358   47919 kubeadm.go:322] 
	I0229 19:04:08.964385   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:04:08.964441   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:04:08.964547   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:04:08.964560   47919 kubeadm.go:322] 
	I0229 19:04:08.964684   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:04:08.964734   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:04:08.964780   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:04:08.964789   47919 kubeadm.go:322] 
	I0229 19:04:08.964922   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:04:08.965053   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:04:08.965180   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:04:08.965255   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:04:08.965342   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:04:08.965438   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 19:04:08.965475   47919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 19:04:08.965520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:04:09.441915   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:09.459807   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:04:09.471061   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:04:09.471099   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:04:09.532830   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:09.532979   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:09.673720   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:09.673884   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:09.674071   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:09.905201   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:09.906612   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:09.915393   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:10.035443   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:05.845532   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.846899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:09.847708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.037103   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:10.037203   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:10.037335   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:10.037453   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:10.037558   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:10.037689   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:10.037832   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:10.038465   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:10.038932   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:10.039471   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:10.039874   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:10.039961   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:10.040045   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:10.157741   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:10.426271   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:10.528768   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:10.595099   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:10.596020   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:08.252779   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.753332   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.252867   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.752631   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.253281   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.753138   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.253104   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.752894   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.253271   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.753046   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.367912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:12.870689   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.597781   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:10.597872   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:10.602307   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:10.603371   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:10.604660   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:10.607876   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:12.346304   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:14.346555   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:13.252668   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:13.752660   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.252803   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.752360   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.252343   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.752568   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.252484   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.752977   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.253148   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.753112   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:17.867839   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.253109   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:18.753221   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.253179   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.752851   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.875013   47515 kubeadm.go:1088] duration metric: took 12.44055176s to wait for elevateKubeSystemPrivileges.
	I0229 19:04:19.875056   47515 kubeadm.go:406] StartCluster complete in 5m26.137187745s
	I0229 19:04:19.875078   47515 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.875156   47515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:04:19.876716   47515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.876957   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:04:19.877116   47515 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:04:19.877196   47515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247197"
	I0229 19:04:19.877207   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:04:19.877222   47515 addons.go:69] Setting metrics-server=true in profile "no-preload-247197"
	I0229 19:04:19.877208   47515 addons.go:69] Setting default-storageclass=true in profile "no-preload-247197"
	I0229 19:04:19.877269   47515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247197"
	I0229 19:04:19.877213   47515 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247197"
	W0229 19:04:19.877372   47515 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:04:19.877412   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877244   47515 addons.go:234] Setting addon metrics-server=true in "no-preload-247197"
	W0229 19:04:19.877465   47515 addons.go:243] addon metrics-server should already be in state true
	I0229 19:04:19.877519   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877697   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877734   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877787   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877822   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877875   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877905   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.895578   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0229 19:04:19.896005   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.896491   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.896516   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.897033   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.897628   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.897677   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.897705   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0229 19:04:19.897711   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0229 19:04:19.898072   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898171   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898512   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898533   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898653   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898674   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898854   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899002   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899159   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.899386   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.899433   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.902917   47515 addons.go:234] Setting addon default-storageclass=true in "no-preload-247197"
	W0229 19:04:19.902937   47515 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:04:19.902965   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.903374   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.903492   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.915592   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0229 19:04:19.916152   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.916347   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0229 19:04:19.916677   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.916694   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.916799   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.917168   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.917302   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.917314   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.917505   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918075   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.918253   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918351   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0229 19:04:19.918773   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.919153   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.919170   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.919631   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.919999   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.922165   47515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:04:19.920215   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.920473   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.923441   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.923454   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:04:19.923466   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:04:19.923481   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.924990   47515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:04:16.845870   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:19.926366   47515 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:19.926372   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926384   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:04:19.926402   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.926728   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.926752   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926908   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.927072   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.927216   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.927357   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.929366   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929709   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.929728   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929855   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.930000   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.930090   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.930171   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.940292   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0229 19:04:19.940856   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.941327   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.941354   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.941647   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.941817   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.943378   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.943608   47515 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:19.943624   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:04:19.943640   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.946715   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947112   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.947132   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947413   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.947546   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.947672   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.947795   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:20.159078   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:20.246059   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:04:20.246085   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:04:20.338238   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:04:20.338261   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:04:20.365954   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
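The pipeline above rewrites the CoreDNS Corefile in place: the sed expressions insert a log directive before the errors plugin and a hosts stanza ahead of the forward plugin, so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 here). Reading the insertion text straight out of that command, the affected part of the server block ends up shaped like this (other plugins unchanged and elided):

          log
          errors
          ...
          hosts {
             192.168.50.1 host.minikube.internal
             fallthrough
          }
          forward . /etc/resolv.conf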
	I0229 19:04:20.383186   47515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247197" context rescaled to 1 replicas
	I0229 19:04:20.383231   47515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:04:20.385225   47515 out.go:177] * Verifying Kubernetes components...
	I0229 19:04:20.386616   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:20.395136   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:20.442555   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:20.442575   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:04:20.584731   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:21.931286   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772173305s)
	I0229 19:04:21.931338   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931350   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.931346   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.565356284s)
	I0229 19:04:21.931374   47515 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 19:04:21.931413   47515 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.544778173s)
	I0229 19:04:21.931439   47515 node_ready.go:35] waiting up to 6m0s for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.931456   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.536286802s)
	I0229 19:04:21.931484   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931493   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932214   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932216   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932230   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932243   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932252   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932269   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932251   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932321   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932330   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932340   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932458   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932470   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932629   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932649   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932656   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.949312   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.949338   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.949619   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.949662   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.949675   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.951119   47515 node_ready.go:49] node "no-preload-247197" has status "Ready":"True"
	I0229 19:04:21.951138   47515 node_ready.go:38] duration metric: took 19.687343ms waiting for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.951148   47515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:04:21.965909   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979164   47515 pod_ready.go:92] pod "coredns-76f75df574-4k6hl" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.979185   47515 pod_ready.go:81] duration metric: took 13.25328ms waiting for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979197   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987905   47515 pod_ready.go:92] pod "coredns-76f75df574-9z6k5" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.987924   47515 pod_ready.go:81] duration metric: took 8.719445ms waiting for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987935   47515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992310   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.992328   47515 pod_ready.go:81] duration metric: took 4.385196ms waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992339   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999702   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.999722   47515 pod_ready.go:81] duration metric: took 7.374368ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999733   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.010201   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425431238s)
	I0229 19:04:22.010236   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010249   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010564   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010605   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010614   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010635   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010644   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010882   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010900   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010910   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010910   47515 addons.go:470] Verifying addon metrics-server=true in "no-preload-247197"
	I0229 19:04:22.013314   47515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 19:04:22.014366   47515 addons.go:505] enable addons completed in 2.137254118s: enabled=[storage-provisioner default-storageclass metrics-server]
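With addon setup reported complete, the same state can be re-checked from the host. A short sketch; the profile name is taken from this log, while the APIService name assumes the stock metrics-server manifests:

    # Addon status for the profile
    minikube addons list -p no-preload-247197

    # The metrics-server Deployment and its aggregated API registration
    kubectl --context no-preload-247197 -n kube-system get deploy metrics-server
    kubectl --context no-preload-247197 get apiservice v1beta1.metrics.k8s.io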
	I0229 19:04:22.338772   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.338799   47515 pod_ready.go:81] duration metric: took 339.058404ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.338812   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737254   47515 pod_ready.go:92] pod "kube-proxy-vvkjv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.737280   47515 pod_ready.go:81] duration metric: took 398.461074ms waiting for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737294   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:20.370710   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:22.866800   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:20.846680   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.345140   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.135406   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:23.135428   47515 pod_ready.go:81] duration metric: took 398.125345ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:23.135440   47515 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
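The long run of "Ready":"False" polls that follows is the harness waiting on this metrics-server pod. When a pod stays not-Ready like this, the usual first checks are along these lines (a sketch: the pod name is copied from the log, the context name is an assumption):

    # Conditions and recent events for the stuck pod
    kubectl --context no-preload-247197 -n kube-system describe pod metrics-server-57f55c9bc5-nj5h7

    # Container logs, which typically surface scrape or TLS errors against the kubelet
    kubectl --context no-preload-247197 -n kube-system logs deploy/metrics-server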
	I0229 19:04:25.142619   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.143696   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.367175   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.380854   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.346266   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.844825   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.846222   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.642557   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.143195   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.866361   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.365864   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.344240   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.345406   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.642612   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.642921   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.366701   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.865897   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:38.866354   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.845225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.344488   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.142773   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.643462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:40.866485   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.367569   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.345439   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.346065   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:44.142927   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:46.642548   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.369460   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.867209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.845033   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.845603   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:48.643538   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:51.143346   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.365414   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.366281   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.609556   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:50.610341   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:50.610592   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
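These kubelet-check lines come from kubeadm (pid 47919, the v1.16.0 start) probing the kubelet's local health endpoint inside the guest and getting connection refused. The same probe can be reproduced by hand; <profile> below is a placeholder for the failing profile, not a value from this log:

    # Shell into the guest VM for the failing profile
    minikube ssh -p <profile>

    # Inside the guest: reproduce kubeadm's probe and inspect the kubelet unit
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50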
	I0229 19:04:50.347163   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.846321   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.847146   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:53.643605   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.644824   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.866162   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:57.366119   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.610941   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:55.611235   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:57.345852   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.846768   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:58.141799   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:00.142827   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.642593   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.867791   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.366238   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.345863   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.844340   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.643708   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:07.142551   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.367016   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:06.866170   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.869317   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:05.611726   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:05.611996   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:06.846686   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.846956   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:09.143595   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.143779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.367337   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.865929   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.345732   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.346279   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.644332   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:16.143576   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.866653   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.844887   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:17.846717   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.642599   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.642837   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.643895   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.368483   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.866758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.346170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.845477   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.142628   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.643975   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.366726   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.866780   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.612622   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:25.612856   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:25.346171   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.346624   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:29.844724   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.142942   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.143445   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.367152   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.865657   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:31.845835   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.347482   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.642780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.642919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.870444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:37.367617   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.844507   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.845472   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.643505   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.142928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:39.865207   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.867210   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.344604   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.347346   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.143348   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.143659   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.643054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:44.366192   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:46.368043   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:48.867455   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.844395   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.845753   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.143481   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.643947   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:51.365758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:53.866493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.344819   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.346315   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:54.845777   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.145751   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.644326   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.866532   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.866831   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:56.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.345840   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:00.142068   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.142779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.870256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.365280   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:01.845248   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.347842   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:05.613204   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:06:05.613467   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613495   47919 kubeadm.go:322] 
	I0229 19:06:05.613547   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:06:05.613598   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:06:05.613608   47919 kubeadm.go:322] 
	I0229 19:06:05.613653   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:06:05.613694   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:06:05.613814   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:06:05.613823   47919 kubeadm.go:322] 
	I0229 19:06:05.613911   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:06:05.613941   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:06:05.613974   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:06:05.613980   47919 kubeadm.go:322] 
	I0229 19:06:05.614107   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:06:05.614240   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:06:05.614361   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:06:05.614432   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:06:05.614533   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:06:05.614577   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:06:05.615575   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:06:05.615689   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:06:05.615765   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:06:05.615822   47919 kubeadm.go:406] StartCluster complete in 8m8.067253054s
	I0229 19:06:05.615873   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:06:05.615920   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:06:05.671959   47919 cri.go:89] found id: ""
	I0229 19:06:05.671998   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.672018   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:05.672025   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:06:05.672075   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:06:05.715832   47919 cri.go:89] found id: ""
	I0229 19:06:05.715853   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.715860   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:06:05.715866   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:06:05.715911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:06:05.755305   47919 cri.go:89] found id: ""
	I0229 19:06:05.755334   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.755345   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:06:05.755351   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:06:05.755409   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:06:05.807907   47919 cri.go:89] found id: ""
	I0229 19:06:05.807938   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.807950   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:05.807957   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:06:05.808015   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:06:05.892777   47919 cri.go:89] found id: ""
	I0229 19:06:05.892805   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.892813   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:05.892818   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:06:05.892877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:06:05.931488   47919 cri.go:89] found id: ""
	I0229 19:06:05.931516   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.931527   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:05.931534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:06:05.931578   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:06:05.971989   47919 cri.go:89] found id: ""
	I0229 19:06:05.972018   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.972030   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:05.972037   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:06:05.972112   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:06:06.008174   47919 cri.go:89] found id: ""
	I0229 19:06:06.008198   47919 logs.go:276] 0 containers: []
	W0229 19:06:06.008208   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
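None of the crictl queries above found a control-plane container, consistent with the kubelet never launching the static pods. The harness gathers logs next; the same data can be pulled manually inside the guest using the commands the log itself shows minikube running (<profile> is a placeholder):

    # Inside the guest (minikube ssh -p <profile>): kubelet and CRI-O journals
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # Every container known to the CRI runtime, including exited ones
    sudo crictl ps -a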
	I0229 19:06:06.008224   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:06.008241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:06.024924   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:06.024953   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:06.111879   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:06.111904   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:06:06.111918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:06:06.221563   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:06:06.221593   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:06.266861   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:06.266897   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:06.314923   47919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:06:06.314971   47919 out.go:239] * 
	W0229 19:06:06.315043   47919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.315065   47919 out.go:239] * 
	W0229 19:06:06.315824   47919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:06:06.318988   47919 out.go:177] 
	W0229 19:06:06.320200   47919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.320245   47919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:06:06.320270   47919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:06:06.321598   47919 out.go:177] 
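The suggestion printed above points at a kubelet cgroup-driver mismatch as the likely cause of the failed v1.16.0 start. A sketch of retrying the profile with the suggested extra-config; the profile placeholder, driver, and runtime flags are assumptions, and only the kubelet.cgroup-driver setting is quoted from the log:

    # Recreate the profile with the kubelet cgroup driver forced to systemd
    minikube delete -p <profile>
    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd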
	I0229 19:06:04.143707   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.145980   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.366140   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.366873   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.366955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.852698   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:09.348579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.643671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.143678   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:10.865166   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:12.866971   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.845538   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:14.346445   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:13.642537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.643262   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.647209   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.366149   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.367209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:16.845485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:18.845671   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.647627   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:22.145622   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.866267   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:21.866857   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:20.845841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:23.349149   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.646242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:27.143078   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.368344   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:26.867329   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:25.846273   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:28.346226   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.642886   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.646657   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.365191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.366142   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:33.865692   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:30.845019   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:32.845500   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:34.142811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:36.144736   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.870114   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.365999   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.347102   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:37.347579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:39.845962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.642930   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.642989   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.645337   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.366651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.865651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:41.846699   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.348062   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:45.145291   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.643786   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.866389   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.365775   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:46.844303   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:48.845366   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:50.143250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:52.642758   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:49.366973   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.865400   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.868123   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.345427   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.346292   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:54.643044   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.643641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.366088   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.865505   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:55.845353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.345421   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.644239   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.142462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.374753   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:03.866228   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:00.345809   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.845528   47608 pod_ready.go:81] duration metric: took 4m0.007876165s waiting for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:01.845551   47608 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:01.845562   47608 pod_ready.go:38] duration metric: took 4m0.790976213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:07:01.845581   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:01.845611   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:01.845671   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:01.901601   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:01.901625   47608 cri.go:89] found id: ""
	I0229 19:07:01.901636   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:01.901693   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.906698   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:01.906771   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:01.947360   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:01.947383   47608 cri.go:89] found id: ""
	I0229 19:07:01.947395   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:01.947453   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.952251   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:01.952314   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:01.996254   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:01.996279   47608 cri.go:89] found id: ""
	I0229 19:07:01.996289   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:01.996346   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.001158   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:02.001229   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:02.039559   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.039583   47608 cri.go:89] found id: ""
	I0229 19:07:02.039593   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:02.039653   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.045320   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:02.045439   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:02.091908   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.091932   47608 cri.go:89] found id: ""
	I0229 19:07:02.091941   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:02.092002   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.097461   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:02.097533   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:02.142993   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.143017   47608 cri.go:89] found id: ""
	I0229 19:07:02.143043   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:02.143114   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.148395   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:02.148469   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:02.189479   47608 cri.go:89] found id: ""
	I0229 19:07:02.189500   47608 logs.go:276] 0 containers: []
	W0229 19:07:02.189508   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:02.189513   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:02.189567   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:02.237218   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.237238   47608 cri.go:89] found id: ""
	I0229 19:07:02.237246   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:02.237299   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.242232   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:02.242256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:02.258190   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:02.258213   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:02.401759   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:02.401786   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:02.455230   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:02.455256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.507842   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:02.507870   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:02.562721   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:02.562747   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:02.655664   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:02.655696   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:02.711422   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:02.711450   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:02.763124   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:02.763151   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.812093   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:02.812126   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.863781   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:02.863810   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.909931   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:02.909956   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:03.148571   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.642292   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:07.646950   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.866773   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:08.364842   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.846592   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:05.868139   47608 api_server.go:72] duration metric: took 4m6.97199894s to wait for apiserver process to appear ...
	I0229 19:07:05.868162   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:05.868198   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:05.868254   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:05.911179   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:05.911204   47608 cri.go:89] found id: ""
	I0229 19:07:05.911213   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:05.911283   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.917051   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:05.917127   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:05.958278   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:05.958304   47608 cri.go:89] found id: ""
	I0229 19:07:05.958312   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:05.958366   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.963467   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:05.963538   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:06.003497   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.003516   47608 cri.go:89] found id: ""
	I0229 19:07:06.003525   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:06.003578   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.008829   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:06.008900   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:06.048632   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.048654   47608 cri.go:89] found id: ""
	I0229 19:07:06.048662   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:06.048719   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.053674   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:06.053725   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:06.095377   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.095398   47608 cri.go:89] found id: ""
	I0229 19:07:06.095406   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:06.095455   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.100277   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:06.100344   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:06.141330   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.141351   47608 cri.go:89] found id: ""
	I0229 19:07:06.141361   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:06.141418   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.146628   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:06.146675   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:06.195525   47608 cri.go:89] found id: ""
	I0229 19:07:06.195552   47608 logs.go:276] 0 containers: []
	W0229 19:07:06.195563   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:06.195570   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:06.195626   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:06.242893   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.242912   47608 cri.go:89] found id: ""
	I0229 19:07:06.242918   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:06.242963   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.247876   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:06.247894   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:06.264869   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:06.264905   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:06.403612   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:06.403639   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:06.468541   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:06.468569   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:06.523984   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:06.524016   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.599105   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:06.599133   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.672044   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:06.672074   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:06.772478   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:06.772509   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.817949   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:06.817978   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.866713   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:06.866743   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.912206   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:06.912234   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:07.320100   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:07.320136   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:09.875603   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 19:07:09.884525   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 19:07:09.886045   47608 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:09.886063   47608 api_server.go:131] duration metric: took 4.017895877s to wait for apiserver health ...
	I0229 19:07:09.886071   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:09.886091   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:09.886137   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:09.940809   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:09.940831   47608 cri.go:89] found id: ""
	I0229 19:07:09.940838   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:09.940901   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:09.945610   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:09.945668   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:09.995270   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:09.995291   47608 cri.go:89] found id: ""
	I0229 19:07:09.995299   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:09.995353   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.000358   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:10.000431   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:10.052073   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.052094   47608 cri.go:89] found id: ""
	I0229 19:07:10.052103   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:10.052164   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.058993   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:10.059071   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:10.110467   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.110494   47608 cri.go:89] found id: ""
	I0229 19:07:10.110501   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:10.110556   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.115491   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:10.115545   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:10.159522   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.159540   47608 cri.go:89] found id: ""
	I0229 19:07:10.159548   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:10.159602   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.164162   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:10.164223   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:10.204583   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:10.204602   47608 cri.go:89] found id: ""
	I0229 19:07:10.204623   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:10.204699   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.209550   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:10.209602   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:10.246884   47608 cri.go:89] found id: ""
	I0229 19:07:10.246907   47608 logs.go:276] 0 containers: []
	W0229 19:07:10.246915   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:10.246925   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:10.246970   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:10.142347   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.142912   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:10.286397   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:10.286420   47608 cri.go:89] found id: ""
	I0229 19:07:10.286429   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:10.286476   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.292279   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:10.292303   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:10.432648   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:10.432683   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:10.485438   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:10.485468   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:10.532671   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:10.532702   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.574743   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:10.574768   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.625137   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:10.625164   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.669432   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:10.669457   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:11.008876   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:11.008906   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:11.060752   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:11.060785   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:11.167311   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:11.167344   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:11.185133   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:11.185160   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:11.251587   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:11.251614   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:13.809877   47608 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:13.809904   47608 system_pods.go:61] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.809910   47608 system_pods.go:61] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.809915   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.809920   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.809924   47608 system_pods.go:61] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.809928   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.809937   47608 system_pods.go:61] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.809945   47608 system_pods.go:61] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.809957   47608 system_pods.go:74] duration metric: took 3.923878638s to wait for pod list to return data ...
	I0229 19:07:13.809967   47608 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:13.814425   47608 default_sa.go:45] found service account: "default"
	I0229 19:07:13.814451   47608 default_sa.go:55] duration metric: took 4.476391ms for default service account to be created ...
	I0229 19:07:13.814463   47608 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:13.822812   47608 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:13.822834   47608 system_pods.go:89] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.822842   47608 system_pods.go:89] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.822849   47608 system_pods.go:89] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.822856   47608 system_pods.go:89] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.822864   47608 system_pods.go:89] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.822871   47608 system_pods.go:89] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.822883   47608 system_pods.go:89] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.822893   47608 system_pods.go:89] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.822908   47608 system_pods.go:126] duration metric: took 8.437411ms to wait for k8s-apps to be running ...
	I0229 19:07:13.822919   47608 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:13.822973   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:13.841166   47608 system_svc.go:56] duration metric: took 18.240886ms WaitForService to wait for kubelet.
	I0229 19:07:13.841190   47608 kubeadm.go:581] duration metric: took 4m14.94505166s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:13.841213   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:13.844369   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:13.844393   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:13.844404   47608 node_conditions.go:105] duration metric: took 3.186855ms to run NodePressure ...
	I0229 19:07:13.844416   47608 start.go:228] waiting for startup goroutines ...
	I0229 19:07:13.844425   47608 start.go:233] waiting for cluster config update ...
	I0229 19:07:13.844438   47608 start.go:242] writing updated cluster config ...
	I0229 19:07:13.844737   47608 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:13.894129   47608 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:13.896056   47608 out.go:177] * Done! kubectl is now configured to use "embed-certs-991128" cluster and "default" namespace by default
	I0229 19:07:10.367615   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.866425   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.145357   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:16.642943   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.867561   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:17.366556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.143410   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.147970   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.367285   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.865048   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.868674   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.643039   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.643205   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:27.643525   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.869656   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:28.369270   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.142250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.142304   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.865630   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.870509   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:34.143254   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:36.645374   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:35.367229   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:37.865920   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:38.646004   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:41.146450   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:40.368452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:42.866110   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:43.643363   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.643443   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:47.644208   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:44.868350   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.865595   48088 pod_ready.go:81] duration metric: took 4m0.007156363s waiting for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:45.865618   48088 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:45.865628   48088 pod_ready.go:38] duration metric: took 4m1.182191329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:07:45.865647   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:45.865681   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:45.865737   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:45.924104   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:45.924127   48088 cri.go:89] found id: ""
	I0229 19:07:45.924136   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:45.924194   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.929769   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:45.929823   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:45.973018   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:45.973039   48088 cri.go:89] found id: ""
	I0229 19:07:45.973048   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:45.973102   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.978222   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:45.978284   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:46.019965   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.019984   48088 cri.go:89] found id: ""
	I0229 19:07:46.019991   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:46.020046   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.024852   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:46.024909   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:46.067904   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.067921   48088 cri.go:89] found id: ""
	I0229 19:07:46.067928   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:46.067970   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.073790   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:46.073855   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:46.113273   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.113299   48088 cri.go:89] found id: ""
	I0229 19:07:46.113320   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:46.113375   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.118626   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:46.118692   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:46.169986   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.170008   48088 cri.go:89] found id: ""
	I0229 19:07:46.170017   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:46.170065   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.175639   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:46.175699   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:46.220353   48088 cri.go:89] found id: ""
	I0229 19:07:46.220383   48088 logs.go:276] 0 containers: []
	W0229 19:07:46.220394   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:46.220402   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:46.220460   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:46.267009   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.267045   48088 cri.go:89] found id: ""
	I0229 19:07:46.267055   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:46.267105   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.272422   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:46.272445   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.337524   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:46.337554   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:46.454444   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:46.454484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:46.601211   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:46.601239   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:46.661763   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:46.661794   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.707569   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:46.707594   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.774076   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:46.774107   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.821259   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:46.821288   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:46.837496   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:46.837519   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:46.890812   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:46.890841   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.934532   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:46.934559   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:47.395235   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:47.395269   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.144146   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:52.144673   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:49.959190   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:49.978381   48088 api_server.go:72] duration metric: took 4m7.681437754s to wait for apiserver process to appear ...
	I0229 19:07:49.978407   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:49.978464   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:49.978513   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:50.028150   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.028176   48088 cri.go:89] found id: ""
	I0229 19:07:50.028186   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:50.028242   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.033649   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:50.033719   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:50.083761   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.083785   48088 cri.go:89] found id: ""
	I0229 19:07:50.083795   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:50.083866   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.088829   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:50.088913   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:50.138098   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:50.138120   48088 cri.go:89] found id: ""
	I0229 19:07:50.138148   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:50.138203   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.143751   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:50.143824   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:50.181953   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:50.181973   48088 cri.go:89] found id: ""
	I0229 19:07:50.182005   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:50.182061   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.187673   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:50.187738   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:50.239764   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.239787   48088 cri.go:89] found id: ""
	I0229 19:07:50.239797   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:50.239945   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.244916   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:50.244980   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:50.285741   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.285764   48088 cri.go:89] found id: ""
	I0229 19:07:50.285774   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:50.285833   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.290537   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:50.290607   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:50.334081   48088 cri.go:89] found id: ""
	I0229 19:07:50.334113   48088 logs.go:276] 0 containers: []
	W0229 19:07:50.334125   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:50.334133   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:50.334218   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:50.382210   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.382240   48088 cri.go:89] found id: ""
	I0229 19:07:50.382249   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:50.382309   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.387638   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:50.387659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:50.402846   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:50.402871   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.449452   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:50.449484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.503887   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:50.503921   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.545549   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:50.545620   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.607117   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:50.607144   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:50.711241   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:50.711302   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:50.857588   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:50.857622   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.912908   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:50.912943   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.958888   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:50.958918   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:51.008029   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:51.008059   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:51.064227   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:51.064262   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:53.940284   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 19:07:53.945473   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 19:07:53.946909   48088 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:53.946925   48088 api_server.go:131] duration metric: took 3.968511547s to wait for apiserver health ...
	I0229 19:07:53.946938   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:53.946958   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:53.947009   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:53.996337   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:53.996357   48088 cri.go:89] found id: ""
	I0229 19:07:53.996364   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:53.996409   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.001386   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:54.001465   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:54.051794   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.051814   48088 cri.go:89] found id: ""
	I0229 19:07:54.051821   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:54.051869   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.057560   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:54.057631   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:54.110088   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.110105   48088 cri.go:89] found id: ""
	I0229 19:07:54.110113   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:54.110156   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.115737   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:54.115800   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:54.162820   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.162842   48088 cri.go:89] found id: ""
	I0229 19:07:54.162850   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:54.162899   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.168740   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:54.168795   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:54.210577   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.210617   48088 cri.go:89] found id: ""
	I0229 19:07:54.210625   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:54.210673   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.216266   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:54.216317   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:54.255416   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.255442   48088 cri.go:89] found id: ""
	I0229 19:07:54.255451   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:54.255511   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.260522   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:54.260585   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:54.645279   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:57.144190   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:54.309825   48088 cri.go:89] found id: ""
	I0229 19:07:54.309861   48088 logs.go:276] 0 containers: []
	W0229 19:07:54.309873   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:54.309881   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:54.309950   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:54.353200   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:54.353219   48088 cri.go:89] found id: ""
	I0229 19:07:54.353225   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:54.353278   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.357943   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:54.357965   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:54.456867   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:54.456901   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:54.474633   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:54.474659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:54.538218   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:54.538256   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.591570   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:54.591607   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:54.643603   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:54.643638   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:54.787255   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:54.787284   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.836816   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:54.836840   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.888605   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:54.888635   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.930913   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:54.930942   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.996868   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:54.996904   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:55.038936   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:55.038975   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:57.896563   48088 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:57.896600   48088 system_pods.go:61] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.896607   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.896612   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.896617   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.896621   48088 system_pods.go:61] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.896625   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.896633   48088 system_pods.go:61] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.896641   48088 system_pods.go:61] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.896650   48088 system_pods.go:74] duration metric: took 3.949706328s to wait for pod list to return data ...
	I0229 19:07:57.896661   48088 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:57.899954   48088 default_sa.go:45] found service account: "default"
	I0229 19:07:57.899982   48088 default_sa.go:55] duration metric: took 3.312049ms for default service account to be created ...
	I0229 19:07:57.899994   48088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:57.906500   48088 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:57.906535   48088 system_pods.go:89] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.906545   48088 system_pods.go:89] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.906552   48088 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.906560   48088 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.906566   48088 system_pods.go:89] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.906572   48088 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.906584   48088 system_pods.go:89] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.906599   48088 system_pods.go:89] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.906611   48088 system_pods.go:126] duration metric: took 6.610073ms to wait for k8s-apps to be running ...
	I0229 19:07:57.906624   48088 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:57.906684   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:57.928757   48088 system_svc.go:56] duration metric: took 22.126375ms WaitForService to wait for kubelet.
	I0229 19:07:57.928784   48088 kubeadm.go:581] duration metric: took 4m15.631847215s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:57.928802   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:57.932654   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:57.932673   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:57.932683   48088 node_conditions.go:105] duration metric: took 3.87689ms to run NodePressure ...
	I0229 19:07:57.932693   48088 start.go:228] waiting for startup goroutines ...
	I0229 19:07:57.932700   48088 start.go:233] waiting for cluster config update ...
	I0229 19:07:57.932711   48088 start.go:242] writing updated cluster config ...
	I0229 19:07:57.932956   48088 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:57.982872   48088 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:57.984759   48088 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-153528" cluster and "default" namespace by default
	I0229 19:07:59.144395   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:01.643273   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:04.142449   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:06.145652   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:08.644566   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:11.144108   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:13.147164   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:15.646715   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:18.143168   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:20.643045   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:22.644969   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:23.142859   47515 pod_ready.go:81] duration metric: took 4m0.007407175s waiting for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	E0229 19:08:23.142882   47515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:08:23.142892   47515 pod_ready.go:38] duration metric: took 4m1.191734178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:08:23.142918   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:08:23.142959   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:23.143015   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:23.200836   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.200855   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:23.200861   47515 cri.go:89] found id: ""
	I0229 19:08:23.200868   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:23.200925   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.206581   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.211810   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:23.211873   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:23.257499   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.257518   47515 cri.go:89] found id: ""
	I0229 19:08:23.257526   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:23.257568   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.262794   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:23.262858   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:23.314356   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:23.314379   47515 cri.go:89] found id: ""
	I0229 19:08:23.314389   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:23.314433   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.319774   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:23.319828   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:23.363724   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.363746   47515 cri.go:89] found id: ""
	I0229 19:08:23.363753   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:23.363798   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.368994   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:23.369044   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:23.410298   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:23.410317   47515 cri.go:89] found id: ""
	I0229 19:08:23.410323   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:23.410375   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.416866   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:23.416941   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:23.460286   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:23.460313   47515 cri.go:89] found id: ""
	I0229 19:08:23.460323   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:23.460378   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.467279   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:23.467343   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:23.505758   47515 cri.go:89] found id: ""
	I0229 19:08:23.505790   47515 logs.go:276] 0 containers: []
	W0229 19:08:23.505801   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:23.505808   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:23.505870   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:23.545547   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.545573   47515 cri.go:89] found id: ""
	I0229 19:08:23.545581   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:23.545642   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.550632   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:23.550652   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.613033   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:23.613072   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.664593   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:23.664623   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.723282   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:23.723311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.764629   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:23.764655   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:24.254240   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:24.254271   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:24.321241   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:24.321267   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:24.472841   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:24.472870   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:24.492953   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:24.492987   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:24.603910   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:24.603952   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:24.651625   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:24.651653   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:24.693482   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:24.693508   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:24.746081   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:24.746111   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:27.342960   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:08:27.361722   47515 api_server.go:72] duration metric: took 4m6.978456788s to wait for apiserver process to appear ...
	I0229 19:08:27.361756   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:08:27.361795   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:27.361850   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:27.404496   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:27.404525   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:27.404530   47515 cri.go:89] found id: ""
	I0229 19:08:27.404538   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:27.404598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.409339   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.413757   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:27.413814   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:27.456993   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:27.457020   47515 cri.go:89] found id: ""
	I0229 19:08:27.457029   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:27.457089   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.462024   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:27.462088   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:27.506509   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:27.506530   47515 cri.go:89] found id: ""
	I0229 19:08:27.506539   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:27.506598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.511408   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:27.511480   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:27.558522   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:27.558545   47515 cri.go:89] found id: ""
	I0229 19:08:27.558554   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:27.558638   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.566043   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:27.566119   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:27.613465   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:27.613486   47515 cri.go:89] found id: ""
	I0229 19:08:27.613495   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:27.613556   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.618347   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:27.618412   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:27.668486   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:27.668510   47515 cri.go:89] found id: ""
	I0229 19:08:27.668519   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:27.668572   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.673416   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:27.673476   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:27.718790   47515 cri.go:89] found id: ""
	I0229 19:08:27.718813   47515 logs.go:276] 0 containers: []
	W0229 19:08:27.718824   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:27.718831   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:27.718888   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:27.766906   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:27.766988   47515 cri.go:89] found id: ""
	I0229 19:08:27.767005   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:27.767082   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.772046   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:27.772073   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:27.789085   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:27.789118   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:27.915599   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:27.915629   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:28.022219   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:28.022253   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:28.068916   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:28.068942   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:28.116119   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:28.116145   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:28.158177   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:28.158206   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:28.256419   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:28.256452   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:28.310964   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:28.310995   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:28.366330   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:28.366361   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:28.432543   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:28.432577   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:28.839513   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:28.839550   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:28.889908   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:28.889935   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.447297   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 19:08:31.456672   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 19:08:31.457930   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 19:08:31.457948   47515 api_server.go:131] duration metric: took 4.09618563s to wait for apiserver health ...
	I0229 19:08:31.457955   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:08:31.457974   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:31.458020   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:31.507399   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.507419   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:31.507424   47515 cri.go:89] found id: ""
	I0229 19:08:31.507433   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:31.507493   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.512606   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.516990   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:31.517059   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:31.558856   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:31.558878   47515 cri.go:89] found id: ""
	I0229 19:08:31.558886   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:31.558943   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.564106   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:31.564173   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:31.607870   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:31.607891   47515 cri.go:89] found id: ""
	I0229 19:08:31.607901   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:31.607963   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.612655   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:31.612706   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:31.653422   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:31.653442   47515 cri.go:89] found id: ""
	I0229 19:08:31.653455   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:31.653516   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.659010   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:31.659086   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:31.705187   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:31.705210   47515 cri.go:89] found id: ""
	I0229 19:08:31.705219   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:31.705333   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.710080   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:31.710130   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:31.752967   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:31.752991   47515 cri.go:89] found id: ""
	I0229 19:08:31.753000   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:31.753061   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.757915   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:31.757983   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:31.798767   47515 cri.go:89] found id: ""
	I0229 19:08:31.798794   47515 logs.go:276] 0 containers: []
	W0229 19:08:31.798804   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:31.798812   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:31.798872   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:31.841051   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.841071   47515 cri.go:89] found id: ""
	I0229 19:08:31.841078   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:31.841133   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.845698   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:31.845732   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.887190   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:31.887218   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:32.264861   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:32.264892   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:32.367323   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:32.367364   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:32.416687   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:32.416714   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:32.458459   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:32.458486   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:32.502450   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:32.502476   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:32.555285   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:32.555311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:32.602273   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:32.602303   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:32.655346   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:32.655373   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:32.716233   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:32.716262   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:32.733285   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:32.733311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:32.854014   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:32.854038   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:35.460690   47515 system_pods.go:59] 8 kube-system pods found
	I0229 19:08:35.460717   47515 system_pods.go:61] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.460721   47515 system_pods.go:61] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.460725   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.460728   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.460731   47515 system_pods.go:61] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.460734   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.460740   47515 system_pods.go:61] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.460743   47515 system_pods.go:61] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.460750   47515 system_pods.go:74] duration metric: took 4.002789673s to wait for pod list to return data ...
	I0229 19:08:35.460757   47515 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:08:35.463218   47515 default_sa.go:45] found service account: "default"
	I0229 19:08:35.463248   47515 default_sa.go:55] duration metric: took 2.483102ms for default service account to be created ...
	I0229 19:08:35.463261   47515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:08:35.469351   47515 system_pods.go:86] 8 kube-system pods found
	I0229 19:08:35.469372   47515 system_pods.go:89] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.469377   47515 system_pods.go:89] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.469383   47515 system_pods.go:89] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.469388   47515 system_pods.go:89] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.469392   47515 system_pods.go:89] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.469396   47515 system_pods.go:89] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.469402   47515 system_pods.go:89] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.469407   47515 system_pods.go:89] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.469415   47515 system_pods.go:126] duration metric: took 6.148455ms to wait for k8s-apps to be running ...
	I0229 19:08:35.469422   47515 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:08:35.469464   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:08:35.487453   47515 system_svc.go:56] duration metric: took 18.016016ms WaitForService to wait for kubelet.
	I0229 19:08:35.487485   47515 kubeadm.go:581] duration metric: took 4m15.104218747s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:08:35.487509   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:08:35.490828   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:08:35.490844   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 19:08:35.490854   47515 node_conditions.go:105] duration metric: took 3.34147ms to run NodePressure ...
	I0229 19:08:35.490864   47515 start.go:228] waiting for startup goroutines ...
	I0229 19:08:35.490871   47515 start.go:233] waiting for cluster config update ...
	I0229 19:08:35.490881   47515 start.go:242] writing updated cluster config ...
	I0229 19:08:35.491140   47515 ssh_runner.go:195] Run: rm -f paused
	I0229 19:08:35.539922   47515 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 19:08:35.542171   47515 out.go:177] * Done! kubectl is now configured to use "no-preload-247197" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.661192681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234111661171687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9f9bad7-50d9-475b-866e-b6227be02be6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.662234514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0c91934-30c7-4ad0-b4c4-04159308d5b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.662349738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0c91934-30c7-4ad0-b4c4-04159308d5b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.662387346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0c91934-30c7-4ad0-b4c4-04159308d5b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.698638924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9354519b-96ba-4e6c-8e45-ac3dba92b4f0 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.698721248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9354519b-96ba-4e6c-8e45-ac3dba92b4f0 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.700209866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3fee516e-482a-446b-9e33-ddf62107ba0c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.700696997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234111700668849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fee516e-482a-446b-9e33-ddf62107ba0c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.701418697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f70de8c4-1a1f-4660-90cb-ff50c1452ac1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.701536895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f70de8c4-1a1f-4660-90cb-ff50c1452ac1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.701575194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f70de8c4-1a1f-4660-90cb-ff50c1452ac1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.743120079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bd8e0d9-3d46-4c81-b3bc-c3c5b3e46140 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.743223548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bd8e0d9-3d46-4c81-b3bc-c3c5b3e46140 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.744772406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=485c2de9-b297-4b91-9d4b-a77a111ce368 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.745180330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234111745156338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=485c2de9-b297-4b91-9d4b-a77a111ce368 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.746046109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dde3ffb-2e5d-4b1c-ab0f-b8e425a2b920 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.746126213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dde3ffb-2e5d-4b1c-ab0f-b8e425a2b920 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.746160527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9dde3ffb-2e5d-4b1c-ab0f-b8e425a2b920 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.795189837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6445e372-f09e-4e21-8358-c22ab28182d4 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.795292764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6445e372-f09e-4e21-8358-c22ab28182d4 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.796644194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bd9bc31-eab7-4e02-9320-06c259f90183 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.797065694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234111797045170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bd9bc31-eab7-4e02-9320-06c259f90183 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.797803156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acef3dc8-6e9e-4e5d-9fc3-e95d19e7a63a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.797906565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acef3dc8-6e9e-4e5d-9fc3-e95d19e7a63a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:15:11 old-k8s-version-631080 crio[643]: time="2024-02-29 19:15:11.797941007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=acef3dc8-6e9e-4e5d-9fc3-e95d19e7a63a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053084] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651606] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.237160] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.709570] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.273436] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.071452] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078075] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.234498] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.167610] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.309321] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Feb29 18:58] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062684] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 19:02] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.069082] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 19:04] systemd-fstab-generator[9767]: Ignoring "noauto" option for root device
	[  +0.062408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:15:12 up 17 min,  0 users,  load average: 0.00, 0.07, 0.12
	Linux old-k8s-version-631080 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 19:15:10 old-k8s-version-631080 kubelet[19117]: F0229 19:15:10.327072   19117 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:15:10 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:15:10 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 881.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: I0229 19:15:11.133365   19129 server.go:410] Version: v1.16.0
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: I0229 19:15:11.133756   19129 plugins.go:100] No cloud provider specified.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: I0229 19:15:11.133775   19129 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: I0229 19:15:11.136342   19129 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: W0229 19:15:11.137415   19129 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19129]: F0229 19:15:11.137612   19129 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 882.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: I0229 19:15:11.855853   19179 server.go:410] Version: v1.16.0
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: I0229 19:15:11.856027   19179 plugins.go:100] No cloud provider specified.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: I0229 19:15:11.856037   19179 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: I0229 19:15:11.858102   19179 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: W0229 19:15:11.860337   19179 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:15:11 old-k8s-version-631080 kubelet[19179]: F0229 19:15:11.860426   19179 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:15:11 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (251.643036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-631080" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991128 -n embed-certs-991128
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:16:14.473213883 +0000 UTC m=+5931.968197656
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-991128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-991128 logs -n 25: (2.190896481s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:53:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:53:39.272407   48088 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:53:39.272662   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272672   48088 out.go:304] Setting ErrFile to fd 2...
	I0229 18:53:39.272676   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272900   48088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:53:39.273517   48088 out.go:298] Setting JSON to false
	I0229 18:53:39.274405   48088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1709227056,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:53:39.274466   48088 start.go:139] virtualization: kvm guest
	I0229 18:53:39.276633   48088 out.go:177] * [default-k8s-diff-port-153528] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:53:39.278195   48088 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:53:39.278144   48088 notify.go:220] Checking for updates...
	I0229 18:53:39.280040   48088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:53:39.281568   48088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:53:39.282972   48088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:53:39.284383   48088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:53:39.285858   48088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:53:39.287467   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:53:39.287851   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.287889   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.302503   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0229 18:53:39.302895   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.303402   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.303427   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.303737   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.303893   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.304118   48088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:53:39.304507   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.304554   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.318572   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0229 18:53:39.318978   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.319454   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.319482   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.319748   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.319924   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.351526   48088 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:53:39.352970   48088 start.go:299] selected driver: kvm2
	I0229 18:53:39.352988   48088 start.go:903] validating driver "kvm2" against &{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.353115   48088 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:53:39.353788   48088 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.353869   48088 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:53:39.369184   48088 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:53:39.369569   48088 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:53:39.369647   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:53:39.369664   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:53:39.369679   48088 start_flags.go:323] config:
	{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.369878   48088 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.372634   48088 out.go:177] * Starting control plane node default-k8s-diff-port-153528 in cluster default-k8s-diff-port-153528
	I0229 18:53:41.043270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:39.373930   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:53:39.373998   48088 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:53:39.374011   48088 cache.go:56] Caching tarball of preloaded images
	I0229 18:53:39.374104   48088 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:53:39.374116   48088 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:53:39.374227   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:53:39.374456   48088 start.go:365] acquiring machines lock for default-k8s-diff-port-153528: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:53:44.115305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:50.195317   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:53.267316   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:59.347225   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:02.419258   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:08.499302   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:11.571267   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:17.651296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:20.723290   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:26.803304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:29.875293   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:35.955253   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:39.027319   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:45.107197   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:48.179318   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:54.259261   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:57.331310   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:03.411271   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:06.483320   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:12.563270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:15.635250   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:21.715338   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:24.787238   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:30.867305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:33.939296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:40.019217   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:43.091236   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:49.171281   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:52.243241   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:58.323315   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:01.395368   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:07.475286   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:10.547288   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:16.627301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:19.699291   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:25.779304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:28.851346   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:34.931303   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:38.003301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:44.083295   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:47.155306   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:53.235287   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:56.307311   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:02.387296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:05.391079   47608 start.go:369] acquired machines lock for "embed-certs-991128" in 4m30.01926313s
	I0229 18:57:05.391125   47608 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:05.391130   47608 fix.go:54] fixHost starting: 
	I0229 18:57:05.391473   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:05.391502   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:05.406385   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0229 18:57:05.406855   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:05.407342   47608 main.go:141] libmachine: Using API Version  1
	I0229 18:57:05.407366   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:05.407730   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:05.407939   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:05.408088   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 18:57:05.409862   47608 fix.go:102] recreateIfNeeded on embed-certs-991128: state=Stopped err=<nil>
	I0229 18:57:05.409895   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	W0229 18:57:05.410005   47608 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:05.411812   47608 out.go:177] * Restarting existing kvm2 VM for "embed-certs-991128" ...
	I0229 18:57:05.389096   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:05.389139   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:57:05.390953   47515 machine.go:91] provisioned docker machine in 4m37.390712428s
	I0229 18:57:05.390991   47515 fix.go:56] fixHost completed within 4m37.410903519s
	I0229 18:57:05.390997   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 4m37.410926595s
	W0229 18:57:05.391017   47515 start.go:694] error starting host: provision: host is not running
	W0229 18:57:05.391155   47515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 18:57:05.391169   47515 start.go:709] Will try again in 5 seconds ...
	I0229 18:57:05.413295   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Start
	I0229 18:57:05.413478   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring networks are active...
	I0229 18:57:05.414184   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network default is active
	I0229 18:57:05.414495   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network mk-embed-certs-991128 is active
	I0229 18:57:05.414834   47608 main.go:141] libmachine: (embed-certs-991128) Getting domain xml...
	I0229 18:57:05.415508   47608 main.go:141] libmachine: (embed-certs-991128) Creating domain...
	I0229 18:57:06.606675   47608 main.go:141] libmachine: (embed-certs-991128) Waiting to get IP...
	I0229 18:57:06.607445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.607771   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.607826   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.607762   48607 retry.go:31] will retry after 250.745087ms: waiting for machine to come up
	I0229 18:57:06.860293   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.860711   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.860738   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.860671   48607 retry.go:31] will retry after 259.096096ms: waiting for machine to come up
	I0229 18:57:07.121033   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.121429   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.121458   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.121381   48607 retry.go:31] will retry after 318.126905ms: waiting for machine to come up
	I0229 18:57:07.440859   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.441299   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.441328   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.441243   48607 retry.go:31] will retry after 570.321317ms: waiting for machine to come up
	I0229 18:57:08.012896   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.013331   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.013367   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.013295   48607 retry.go:31] will retry after 489.540139ms: waiting for machine to come up
	I0229 18:57:08.503916   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.504321   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.504358   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.504269   48607 retry.go:31] will retry after 929.011093ms: waiting for machine to come up
	I0229 18:57:09.435395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:09.435803   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:09.435851   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:09.435761   48607 retry.go:31] will retry after 1.087849565s: waiting for machine to come up
	I0229 18:57:10.391806   47515 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:57:10.525247   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:10.525663   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:10.525697   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:10.525612   48607 retry.go:31] will retry after 954.10405ms: waiting for machine to come up
	I0229 18:57:11.481162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:11.481610   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:11.481640   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:11.481558   48607 retry.go:31] will retry after 1.495484693s: waiting for machine to come up
	I0229 18:57:12.979123   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:12.979547   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:12.979572   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:12.979499   48607 retry.go:31] will retry after 2.307927756s: waiting for machine to come up
	I0229 18:57:15.288445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:15.288841   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:15.288871   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:15.288785   48607 retry.go:31] will retry after 2.89615753s: waiting for machine to come up
	I0229 18:57:18.188102   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:18.188474   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:18.188504   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:18.188426   48607 retry.go:31] will retry after 3.511036368s: waiting for machine to come up
	I0229 18:57:21.701039   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:21.701395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:21.701425   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:21.701356   48607 retry.go:31] will retry after 3.516537008s: waiting for machine to come up
	I0229 18:57:25.220199   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220641   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has current primary IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220655   47608 main.go:141] libmachine: (embed-certs-991128) Found IP for machine: 192.168.61.34
	I0229 18:57:25.220663   47608 main.go:141] libmachine: (embed-certs-991128) Reserving static IP address...
	I0229 18:57:25.221122   47608 main.go:141] libmachine: (embed-certs-991128) Reserved static IP address: 192.168.61.34
	I0229 18:57:25.221162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.221179   47608 main.go:141] libmachine: (embed-certs-991128) Waiting for SSH to be available...
	I0229 18:57:25.221222   47608 main.go:141] libmachine: (embed-certs-991128) DBG | skip adding static IP to network mk-embed-certs-991128 - found existing host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"}
	I0229 18:57:25.221243   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Getting to WaitForSSH function...
	I0229 18:57:25.223450   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223775   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.223809   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223951   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH client type: external
	I0229 18:57:25.223981   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa (-rw-------)
	I0229 18:57:25.224014   47608 main.go:141] libmachine: (embed-certs-991128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:25.224032   47608 main.go:141] libmachine: (embed-certs-991128) DBG | About to run SSH command:
	I0229 18:57:25.224052   47608 main.go:141] libmachine: (embed-certs-991128) DBG | exit 0
	I0229 18:57:26.464131   47919 start.go:369] acquired machines lock for "old-k8s-version-631080" in 4m11.42071391s
	I0229 18:57:26.464193   47919 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:26.464200   47919 fix.go:54] fixHost starting: 
	I0229 18:57:26.464621   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:26.464657   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:26.480155   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0229 18:57:26.480488   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:26.481000   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:57:26.481027   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:26.481327   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:26.481514   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:26.481669   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:57:26.482869   47919 fix.go:102] recreateIfNeeded on old-k8s-version-631080: state=Stopped err=<nil>
	I0229 18:57:26.482885   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	W0229 18:57:26.483052   47919 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:26.485421   47919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	I0229 18:57:25.351081   47608 main.go:141] libmachine: (embed-certs-991128) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:25.351434   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetConfigRaw
	I0229 18:57:25.352022   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.354349   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354705   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.354734   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354944   47608 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/config.json ...
	I0229 18:57:25.355150   47608 machine.go:88] provisioning docker machine ...
	I0229 18:57:25.355169   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:25.355351   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355501   47608 buildroot.go:166] provisioning hostname "embed-certs-991128"
	I0229 18:57:25.355528   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355763   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.357784   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358109   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.358134   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.358429   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358567   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358683   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.358840   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.359062   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.359078   47608 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-991128 && echo "embed-certs-991128" | sudo tee /etc/hostname
	I0229 18:57:25.487161   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991128
	
	I0229 18:57:25.487197   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.489979   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490275   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.490308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490539   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.490755   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.490908   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.491047   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.491191   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.491377   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.491405   47608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-991128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-991128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-991128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:25.617911   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:25.617941   47608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:25.617961   47608 buildroot.go:174] setting up certificates
	I0229 18:57:25.617971   47608 provision.go:83] configureAuth start
	I0229 18:57:25.617980   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.618235   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.620943   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621286   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.621318   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621460   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.623629   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.623936   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.623961   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.624074   47608 provision.go:138] copyHostCerts
	I0229 18:57:25.624133   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:25.624154   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:25.624240   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:25.624344   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:25.624355   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:25.624383   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:25.624455   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:25.624462   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:25.624483   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:25.624538   47608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.embed-certs-991128 san=[192.168.61.34 192.168.61.34 localhost 127.0.0.1 minikube embed-certs-991128]
	I0229 18:57:25.757225   47608 provision.go:172] copyRemoteCerts
	I0229 18:57:25.757278   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:25.757301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.759794   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760098   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.760125   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760287   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.760488   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.760664   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.760798   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:25.849527   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:57:25.875673   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:25.902046   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:57:25.927830   47608 provision.go:86] duration metric: configureAuth took 309.850774ms
	I0229 18:57:25.927862   47608 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:25.928081   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:57:25.928163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.930565   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.930917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.930945   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.931135   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.931336   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931493   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931649   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.931806   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.932003   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.932026   47608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:26.205080   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:26.205139   47608 machine.go:91] provisioned docker machine in 849.974413ms
	I0229 18:57:26.205154   47608 start.go:300] post-start starting for "embed-certs-991128" (driver="kvm2")
	I0229 18:57:26.205168   47608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:26.205191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.205537   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:26.205568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.208107   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208417   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.208443   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208625   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.208804   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.208975   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.209084   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.303090   47608 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:26.309522   47608 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:26.309543   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:26.309609   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:26.309697   47608 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:26.309800   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:26.319897   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:26.346220   47608 start.go:303] post-start completed in 141.055399ms
	I0229 18:57:26.346242   47608 fix.go:56] fixHost completed within 20.955110287s
	I0229 18:57:26.346265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.348878   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349237   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.349278   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349415   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.349591   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349742   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349860   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.350032   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:26.350224   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:26.350235   47608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:26.463992   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233046.436502673
	
	I0229 18:57:26.464017   47608 fix.go:206] guest clock: 1709233046.436502673
	I0229 18:57:26.464027   47608 fix.go:219] Guest: 2024-02-29 18:57:26.436502673 +0000 UTC Remote: 2024-02-29 18:57:26.346246091 +0000 UTC m=+291.120011459 (delta=90.256582ms)
	I0229 18:57:26.464055   47608 fix.go:190] guest clock delta is within tolerance: 90.256582ms
	I0229 18:57:26.464062   47608 start.go:83] releasing machines lock for "embed-certs-991128", held for 21.072955529s
	I0229 18:57:26.464099   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.464362   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:26.466954   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.467350   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467452   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468058   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468227   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468287   47608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:26.468356   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.468456   47608 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:26.468477   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.470917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.470996   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471291   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471322   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471352   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471369   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471562   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471602   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471719   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471783   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471873   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.471940   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.472005   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.472095   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.560629   47608 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:26.587852   47608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:26.752819   47608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:26.760557   47608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:26.760629   47608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:26.778065   47608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:26.778096   47608 start.go:475] detecting cgroup driver to use...
	I0229 18:57:26.778156   47608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:26.795970   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:26.810591   47608 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:26.810634   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:26.826715   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:26.840879   47608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:26.959536   47608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:27.143802   47608 docker.go:233] disabling docker service ...
	I0229 18:57:27.143856   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:27.164748   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:27.183161   47608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:27.322659   47608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:27.471650   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:27.489290   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:27.512706   47608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:57:27.512770   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.524596   47608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:27.524657   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.536202   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.547343   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.558390   47608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:27.571297   47608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:27.580859   47608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:27.580903   47608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:27.595324   47608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:27.606130   47608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:27.736363   47608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:27.877719   47608 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:27.877804   47608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:27.882920   47608 start.go:543] Will wait 60s for crictl version
	I0229 18:57:27.883035   47608 ssh_runner.go:195] Run: which crictl
	I0229 18:57:27.887132   47608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:27.925964   47608 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:27.926061   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.958046   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.991575   47608 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:57:26.486586   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .Start
	I0229 18:57:26.486734   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:57:26.487377   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:57:26.487679   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:57:26.488006   47919 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:57:26.488624   47919 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:57:27.689480   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:57:27.690414   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:27.690858   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:27.690932   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:27.690848   48724 retry.go:31] will retry after 309.860592ms: waiting for machine to come up
	I0229 18:57:28.002437   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.002926   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.002959   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.002884   48724 retry.go:31] will retry after 298.018759ms: waiting for machine to come up
	I0229 18:57:28.302325   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.302849   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.302879   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.302801   48724 retry.go:31] will retry after 312.821928ms: waiting for machine to come up
	I0229 18:57:28.617315   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.617797   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.617831   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.617753   48724 retry.go:31] will retry after 373.960028ms: waiting for machine to come up
	I0229 18:57:28.993230   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.993860   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.993881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.993809   48724 retry.go:31] will retry after 516.423282ms: waiting for machine to come up
	I0229 18:57:29.512208   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:29.512683   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:29.512718   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:29.512651   48724 retry.go:31] will retry after 776.839747ms: waiting for machine to come up
	I0229 18:57:27.992835   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:27.995847   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996225   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:27.996255   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996483   47608 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:28.001148   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:28.016232   47608 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:57:28.016293   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:28.055181   47608 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:57:28.055248   47608 ssh_runner.go:195] Run: which lz4
	I0229 18:57:28.059680   47608 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:28.064299   47608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:28.064330   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:57:29.988576   47608 crio.go:444] Took 1.928948 seconds to copy over tarball
	I0229 18:57:29.988670   47608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:30.290748   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:30.291228   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:30.291276   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:30.291195   48724 retry.go:31] will retry after 846.002471ms: waiting for machine to come up
	I0229 18:57:31.139734   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:31.140157   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:31.140177   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:31.140114   48724 retry.go:31] will retry after 1.01688411s: waiting for machine to come up
	I0229 18:57:32.158306   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:32.158845   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:32.158868   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:32.158827   48724 retry.go:31] will retry after 1.217119434s: waiting for machine to come up
	I0229 18:57:33.377121   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:33.377508   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:33.377538   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:33.377475   48724 retry.go:31] will retry after 1.566910779s: waiting for machine to come up
	I0229 18:57:32.844311   47608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.855608287s)
	I0229 18:57:32.844344   47608 crio.go:451] Took 2.855747 seconds to extract the tarball
	I0229 18:57:32.844356   47608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:32.890199   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:32.953328   47608 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:57:32.953351   47608 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:57:32.953408   47608 ssh_runner.go:195] Run: crio config
	I0229 18:57:33.006678   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:33.006701   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:33.006717   47608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:33.006734   47608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-991128 NodeName:embed-certs-991128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:57:33.006872   47608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-991128"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:33.006951   47608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-991128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:33.006998   47608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:57:33.018746   47608 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:33.018824   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:33.029994   47608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 18:57:33.050522   47608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:33.070313   47608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0229 18:57:33.091436   47608 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:33.096253   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:33.110683   47608 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128 for IP: 192.168.61.34
	I0229 18:57:33.110720   47608 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:33.110892   47608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:33.110957   47608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:33.111075   47608 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/client.key
	I0229 18:57:33.111147   47608 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key.d8cf1313
	I0229 18:57:33.111195   47608 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key
	I0229 18:57:33.111320   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:33.111352   47608 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:33.111362   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:33.111383   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:33.111406   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:33.111443   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:33.111479   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:33.112071   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:33.143498   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:57:33.171567   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:33.199300   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:57:33.226492   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:33.254025   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:33.281215   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:33.311188   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:33.342138   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:33.373884   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:33.401130   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:33.427527   47608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:33.446246   47608 ssh_runner.go:195] Run: openssl version
	I0229 18:57:33.455476   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:33.473394   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478904   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478961   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.485913   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:33.499458   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:33.512861   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518749   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518808   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.525612   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:33.539397   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:33.552302   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557481   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557543   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.564226   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:33.577315   47608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:33.582527   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:33.589246   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:33.595992   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:33.602535   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:33.609231   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:33.616292   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:33.623124   47608 kubeadm.go:404] StartCluster: {Name:embed-certs-991128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:33.623239   47608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:33.623281   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:33.663871   47608 cri.go:89] found id: ""
	I0229 18:57:33.663948   47608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:33.676484   47608 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:33.676519   47608 kubeadm.go:636] restartCluster start
	I0229 18:57:33.676576   47608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:33.690000   47608 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:33.690903   47608 kubeconfig.go:92] found "embed-certs-991128" server: "https://192.168.61.34:8443"
	I0229 18:57:33.692909   47608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:33.706062   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:33.706162   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:33.722166   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.206285   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.206371   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.222736   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.706286   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.706415   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.721170   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:35.206815   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.206905   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.223777   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.946027   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:35.171546   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:35.171576   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:34.946337   48724 retry.go:31] will retry after 2.169140366s: waiting for machine to come up
	I0229 18:57:37.117080   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:37.117531   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:37.117564   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:37.117491   48724 retry.go:31] will retry after 2.187461538s: waiting for machine to come up
	I0229 18:57:39.307825   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:39.308159   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:39.308199   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:39.308131   48724 retry.go:31] will retry after 4.480150028s: waiting for machine to come up
	I0229 18:57:35.706239   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.706327   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.727095   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.206608   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.206718   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.220509   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.707149   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.707237   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.725852   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.206401   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.206530   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.225323   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.706920   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.707051   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.725340   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.207012   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.207113   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.225343   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.706906   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.706988   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.720820   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.206324   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.206399   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.220757   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.706274   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.706361   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.719994   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:40.206511   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.206589   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.219998   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.790597   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:43.791050   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:43.791076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:43.790999   48724 retry.go:31] will retry after 3.830907426s: waiting for machine to come up
	I0229 18:57:40.706115   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.706262   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.719892   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.206440   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.206518   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.220057   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.706585   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.706677   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.720355   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.206977   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.207107   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.220629   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.706185   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.706266   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.720230   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.206832   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.206901   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.221019   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.706611   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.706693   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.720457   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.720489   47608 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:57:43.720501   47608 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:57:43.720515   47608 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:57:43.720572   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:43.757509   47608 cri.go:89] found id: ""
	I0229 18:57:43.757592   47608 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:57:43.777950   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:57:43.788404   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:57:43.788455   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799322   47608 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799340   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:43.907052   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.731907   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.940317   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.040382   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.113335   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:57:45.113418   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:48.808893   48088 start.go:369] acquired machines lock for "default-k8s-diff-port-153528" in 4m9.434383703s
	I0229 18:57:48.808960   48088 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:48.808973   48088 fix.go:54] fixHost starting: 
	I0229 18:57:48.809402   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:48.809445   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:48.829022   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0229 18:57:48.829448   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:48.830097   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:57:48.830129   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:48.830547   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:48.830766   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:57:48.830918   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 18:57:48.832707   48088 fix.go:102] recreateIfNeeded on default-k8s-diff-port-153528: state=Stopped err=<nil>
	I0229 18:57:48.832733   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	W0229 18:57:48.832879   48088 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:48.834969   48088 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-153528" ...
	I0229 18:57:48.836190   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Start
	I0229 18:57:48.836352   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring networks are active...
	I0229 18:57:48.837051   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network default is active
	I0229 18:57:48.837440   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network mk-default-k8s-diff-port-153528 is active
	I0229 18:57:48.837886   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Getting domain xml...
	I0229 18:57:48.838747   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Creating domain...
	I0229 18:57:47.623408   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623861   47919 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:57:47.623891   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623900   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:57:47.624340   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:57:47.624374   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.624390   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:57:47.624419   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | skip adding static IP to network mk-old-k8s-version-631080 - found existing host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"}
	I0229 18:57:47.624440   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:57:47.626600   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.626881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.626904   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.627042   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:57:47.627070   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:57:47.627106   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:47.627127   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:57:47.627146   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:57:47.751206   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:47.751569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:57:47.752158   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.754701   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755064   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.755089   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755331   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:57:47.755551   47919 machine.go:88] provisioning docker machine ...
	I0229 18:57:47.755569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:47.755772   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.755961   47919 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:57:47.755979   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.756102   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.758421   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758767   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.758796   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758895   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.759065   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759387   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.759548   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.759718   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.759730   47919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:57:47.879204   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:57:47.879233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.881915   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882207   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.882237   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882407   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.882582   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882737   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882880   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.883053   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.883244   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.883262   47919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:47.996920   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:47.996948   47919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:47.996964   47919 buildroot.go:174] setting up certificates
	I0229 18:57:47.996972   47919 provision.go:83] configureAuth start
	I0229 18:57:47.996980   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.997262   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.999702   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000044   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.000076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000207   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.002169   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002457   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.002479   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002552   47919 provision.go:138] copyHostCerts
	I0229 18:57:48.002600   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:48.002623   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:48.002690   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:48.002805   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:48.002820   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:48.002854   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:48.002936   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:48.002946   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:48.002965   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:48.003030   47919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:57:48.095543   47919 provision.go:172] copyRemoteCerts
	I0229 18:57:48.095594   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:48.095617   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.098167   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098411   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.098439   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098593   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.098770   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.098910   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.099046   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.178774   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:48.204896   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:57:48.234660   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:57:48.264189   47919 provision.go:86] duration metric: configureAuth took 267.20486ms
	I0229 18:57:48.264215   47919 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:48.264391   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:57:48.264464   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.267066   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267464   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.267500   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267721   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.267913   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268105   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268260   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.268425   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.268630   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.268658   47919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:48.560376   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:48.560401   47919 machine.go:91] provisioned docker machine in 804.837627ms
	I0229 18:57:48.560414   47919 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:57:48.560426   47919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:48.560450   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.560751   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:48.560776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.563312   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563638   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.563670   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.563971   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.564126   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.564295   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.646996   47919 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:48.652329   47919 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:48.652356   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:48.652428   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:48.652538   47919 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:48.652665   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:48.663432   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:48.694980   47919 start.go:303] post-start completed in 134.554808ms
	I0229 18:57:48.695000   47919 fix.go:56] fixHost completed within 22.230801566s
	I0229 18:57:48.695033   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.697788   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698205   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.698231   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698416   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.698633   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698797   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698941   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.699118   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.699327   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.699349   47919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:48.808714   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233068.793225740
	
	I0229 18:57:48.808740   47919 fix.go:206] guest clock: 1709233068.793225740
	I0229 18:57:48.808751   47919 fix.go:219] Guest: 2024-02-29 18:57:48.79322574 +0000 UTC Remote: 2024-02-29 18:57:48.695003912 +0000 UTC m=+273.807414604 (delta=98.221828ms)
	I0229 18:57:48.808793   47919 fix.go:190] guest clock delta is within tolerance: 98.221828ms
	I0229 18:57:48.808800   47919 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 22.344627122s
	I0229 18:57:48.808832   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.809114   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:48.811872   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812297   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.812336   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812522   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813072   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813270   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813347   47919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:48.813392   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.813509   47919 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:48.813536   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.816200   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816580   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.816607   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816676   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816753   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.816939   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817097   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817244   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.817268   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.817293   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.817420   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.817538   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817643   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817769   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.919708   47919 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:48.926381   47919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:49.086263   47919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:49.093350   47919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:49.093427   47919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:49.112686   47919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:49.112716   47919 start.go:475] detecting cgroup driver to use...
	I0229 18:57:49.112784   47919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:49.135232   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:49.152937   47919 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:49.152992   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:49.172048   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:49.190450   47919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:49.341605   47919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:49.539663   47919 docker.go:233] disabling docker service ...
	I0229 18:57:49.539733   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:49.562110   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:49.578761   47919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:49.739044   47919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:49.897866   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:49.918783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:45.613998   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.114525   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.146283   47608 api_server.go:72] duration metric: took 1.032950423s to wait for apiserver process to appear ...
	I0229 18:57:46.146327   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:57:46.146344   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:46.146876   47608 api_server.go:269] stopped: https://192.168.61.34:8443/healthz: Get "https://192.168.61.34:8443/healthz": dial tcp 192.168.61.34:8443: connect: connection refused
	I0229 18:57:46.646633   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.751381   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.751410   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:49.751427   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.791602   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.791634   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:50.147094   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.153644   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.153671   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:49.941241   47919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:57:49.941328   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.953131   47919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:49.953215   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.964850   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.976035   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.988017   47919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:50.000990   47919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:50.019124   47919 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:50.019177   47919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:50.042881   47919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:50.054219   47919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:50.213793   47919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:50.387473   47919 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:50.387536   47919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:50.395113   47919 start.go:543] Will wait 60s for crictl version
	I0229 18:57:50.395177   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:50.400166   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:50.446910   47919 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:50.447015   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.486139   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.528290   47919 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:57:50.646967   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.660388   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.660420   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:51.146674   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:51.155154   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 18:57:51.166220   47608 api_server.go:141] control plane version: v1.28.4
	I0229 18:57:51.166255   47608 api_server.go:131] duration metric: took 5.019919259s to wait for apiserver health ...
	I0229 18:57:51.166267   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:51.166277   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:51.168259   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:57:50.148417   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting to get IP...
	I0229 18:57:50.149211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149601   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149661   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.149584   48864 retry.go:31] will retry after 287.925969ms: waiting for machine to come up
	I0229 18:57:50.439389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440003   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440033   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.439944   48864 retry.go:31] will retry after 341.540721ms: waiting for machine to come up
	I0229 18:57:50.783988   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784622   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.784544   48864 retry.go:31] will retry after 344.053696ms: waiting for machine to come up
	I0229 18:57:51.130288   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131126   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131152   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.131075   48864 retry.go:31] will retry after 593.843769ms: waiting for machine to come up
	I0229 18:57:51.726464   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.726974   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.727000   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.726879   48864 retry.go:31] will retry after 689.199247ms: waiting for machine to come up
	I0229 18:57:52.418297   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418801   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418829   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:52.418753   48864 retry.go:31] will retry after 737.671716ms: waiting for machine to come up
	I0229 18:57:53.158161   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158573   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158618   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:53.158521   48864 retry.go:31] will retry after 1.18162067s: waiting for machine to come up
	I0229 18:57:50.530077   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:50.533389   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.533761   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:50.533794   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.534001   47919 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:50.538857   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:50.556961   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:57:50.557028   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:50.616925   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:50.617001   47919 ssh_runner.go:195] Run: which lz4
	I0229 18:57:50.622857   47919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:50.628035   47919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:50.628070   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:57:52.679575   47919 crio.go:444] Took 2.056751 seconds to copy over tarball
	I0229 18:57:52.679656   47919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:51.169655   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:57:51.184521   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:57:51.215791   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:57:51.235050   47608 system_pods.go:59] 8 kube-system pods found
	I0229 18:57:51.235136   47608 system_pods.go:61] "coredns-5dd5756b68-6b5pm" [d8023f3b-fc07-4dd4-98dc-bd321d137a06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:57:51.235150   47608 system_pods.go:61] "etcd-embed-certs-991128" [01a1ee82-a650-4736-8fb9-e983427bef96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:57:51.235161   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [a6810e01-a958-4e7b-ba0f-6cd2e747b998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:57:51.235170   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [6469e9c8-7372-4756-926d-0de600c8ed4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:57:51.235179   47608 system_pods.go:61] "kube-proxy-zd7rf" [963b5fb6-f287-4c80-a324-b0cb09b1ae97] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:57:51.235190   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [ac2e7c83-6e96-46ba-aeed-c847d312ba4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:57:51.235199   47608 system_pods.go:61] "metrics-server-57f55c9bc5-5w6c9" [6ddb9b39-e1d1-4d34-bb45-e9a5c161f13d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:57:51.235220   47608 system_pods.go:61] "storage-provisioner" [99d0cbe5-bb8b-472b-be91-9f29442c8c1d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:57:51.235243   47608 system_pods.go:74] duration metric: took 19.430245ms to wait for pod list to return data ...
	I0229 18:57:51.235257   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:57:51.241823   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:57:51.241849   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 18:57:51.241863   47608 node_conditions.go:105] duration metric: took 6.600606ms to run NodePressure ...
	I0229 18:57:51.241884   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:51.654038   47608 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663120   47608 kubeadm.go:787] kubelet initialised
	I0229 18:57:51.663146   47608 kubeadm.go:788] duration metric: took 9.079921ms waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663156   47608 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:57:51.671417   47608 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:57:53.679921   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:54.342488   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.342981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.343006   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:54.342931   48864 retry.go:31] will retry after 1.180730966s: waiting for machine to come up
	I0229 18:57:55.524954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525398   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525427   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:55.525338   48864 retry.go:31] will retry after 1.706902899s: waiting for machine to come up
	I0229 18:57:57.233340   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233834   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233862   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:57.233791   48864 retry.go:31] will retry after 2.281506267s: waiting for machine to come up
	I0229 18:57:55.661321   47919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.981628592s)
	I0229 18:57:55.661351   47919 crio.go:451] Took 2.981744 seconds to extract the tarball
	I0229 18:57:55.661363   47919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:55.708924   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:55.751627   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:55.751650   47919 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:57:55.751726   47919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.751752   47919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.751758   47919 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:57:55.751735   47919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.751751   47919 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.751772   47919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.751864   47919 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.752153   47919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753139   47919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.753456   47919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.753467   47919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:57:55.753486   47919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.753547   47919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.934620   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988723   47919 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:57:55.988767   47919 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988811   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:55.993750   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:56.036192   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.037872   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.038123   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:57:56.040846   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:57:56.046242   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.065126   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.077683   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.126642   47919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:57:56.126683   47919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.126741   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.191928   47919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:57:56.191980   47919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.192006   47919 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:57:56.192037   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.192045   47919 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:57:56.192086   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.203773   47919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:57:56.203819   47919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.203863   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227761   47919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:57:56.227799   47919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.227832   47919 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:57:56.227856   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227864   47919 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.227876   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.227922   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227925   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:57:56.227956   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.227961   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.246645   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.344012   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:57:56.344125   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:57:56.346352   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:57:56.361309   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.361484   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:57:56.383942   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:57:56.411697   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:57:56.649625   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:56.801430   47919 cache_images.go:92] LoadImages completed in 1.049765957s
	W0229 18:57:56.801578   47919 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0229 18:57:56.801670   47919 ssh_runner.go:195] Run: crio config
	I0229 18:57:56.872210   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:57:56.872238   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:56.872260   47919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:56.872283   47919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:57:56.872458   47919 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:56.872545   47919 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:56.872620   47919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:57:56.884571   47919 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:56.884647   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:56.896167   47919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:57:56.916824   47919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:56.938739   47919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:57:56.961411   47919 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:56.966134   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:56.981089   47919 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:57:56.981121   47919 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:56.981295   47919 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:56.981358   47919 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:56.981465   47919 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:57:56.981533   47919 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:57:56.981586   47919 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:57:56.981755   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:56.981791   47919 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:56.981806   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:56.981845   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:56.981878   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:56.981910   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:56.981961   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:56.982889   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:57.015587   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:57:57.048698   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:57.078634   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:57:57.114008   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:57.146884   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:57.179560   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:57.211989   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:57.246936   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:57.280651   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:57.310050   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:57.337439   47919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:57.359100   47919 ssh_runner.go:195] Run: openssl version
	I0229 18:57:57.366111   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:57.380593   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386703   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386771   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.401429   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:57.416516   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:57.429645   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.434960   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.435010   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.441855   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:57.457277   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:57.471345   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476556   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476629   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.483318   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:57.496355   47919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:57.501976   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:57.509611   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:57.516861   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:57.523819   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:57.530959   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:57.539788   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:57.548575   47919 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:57.548663   47919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:57.548731   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:57.596234   47919 cri.go:89] found id: ""
	I0229 18:57:57.596327   47919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:57.612827   47919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:57.612856   47919 kubeadm.go:636] restartCluster start
	I0229 18:57:57.612903   47919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:57.627565   47919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:57.629049   47919 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:57:57.630139   47919 kubeconfig.go:146] "old-k8s-version-631080" context is missing from /home/jenkins/minikube-integration/18259-6428/kubeconfig - will repair!
	I0229 18:57:57.631735   47919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:57.634318   47919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:57.648383   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:57.648458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:57.663708   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.149010   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.149086   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.164430   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.649075   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.649186   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.663768   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.149370   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.149450   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.165089   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.648609   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.648690   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.665224   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:56.182137   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:58.681550   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:59.517428   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518040   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518069   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:59.517984   48864 retry.go:31] will retry after 2.738727804s: waiting for machine to come up
	I0229 18:58:02.258042   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258540   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258569   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:02.258498   48864 retry.go:31] will retry after 2.520892118s: waiting for machine to come up
	I0229 18:58:00.148880   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.148969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.168561   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.649227   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.649308   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.668162   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.148539   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.148600   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.168347   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.649392   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.649484   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.663974   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.149462   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.149548   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.164757   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.649398   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.649522   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.664014   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.148502   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.148718   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.165374   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.648528   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.648594   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.663305   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.148760   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.148847   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.163480   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.649122   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.649226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.663556   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.179941   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:03.679523   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:04.179171   47608 pod_ready.go:92] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.179198   47608 pod_ready.go:81] duration metric: took 12.507755709s waiting for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.179212   47608 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184638   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.184657   47608 pod_ready.go:81] duration metric: took 5.438559ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184665   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189119   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.189139   47608 pod_ready.go:81] duration metric: took 4.467998ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189147   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193800   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.193819   47608 pod_ready.go:81] duration metric: took 4.66771ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193827   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198220   47608 pod_ready.go:92] pod "kube-proxy-zd7rf" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.198239   47608 pod_ready.go:81] duration metric: took 4.405824ms waiting for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198246   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575846   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.575869   47608 pod_ready.go:81] duration metric: took 377.617228ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575878   47608 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.780871   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781307   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:04.781266   48864 retry.go:31] will retry after 3.73331916s: waiting for machine to come up
	I0229 18:58:08.519173   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.519646   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Found IP for machine: 192.168.39.210
	I0229 18:58:08.519666   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserving static IP address...
	I0229 18:58:08.519687   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has current primary IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.520011   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.520032   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserved static IP address: 192.168.39.210
	I0229 18:58:08.520046   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | skip adding static IP to network mk-default-k8s-diff-port-153528 - found existing host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"}
	I0229 18:58:08.520057   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Getting to WaitForSSH function...
	I0229 18:58:08.520067   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for SSH to be available...
	I0229 18:58:08.522047   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522377   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.522411   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522529   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH client type: external
	I0229 18:58:08.522555   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa (-rw-------)
	I0229 18:58:08.522592   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:08.522606   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | About to run SSH command:
	I0229 18:58:08.522616   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | exit 0
	I0229 18:58:08.651113   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:08.651447   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetConfigRaw
	I0229 18:58:08.652078   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.654739   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655191   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.655222   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655516   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:58:08.655758   48088 machine.go:88] provisioning docker machine ...
	I0229 18:58:08.655787   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:08.655976   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656109   48088 buildroot.go:166] provisioning hostname "default-k8s-diff-port-153528"
	I0229 18:58:08.656127   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.658580   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.658933   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.658961   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.659066   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.659255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659419   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659547   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.659714   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.659933   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.659952   48088 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153528 && echo "default-k8s-diff-port-153528" | sudo tee /etc/hostname
	I0229 18:58:08.782704   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153528
	
	I0229 18:58:08.782727   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.785588   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.785939   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.785967   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.786107   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.786290   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786430   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786550   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.786675   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.786910   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.786937   48088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153528/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:08.906593   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:08.906630   48088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:08.906671   48088 buildroot.go:174] setting up certificates
	I0229 18:58:08.906683   48088 provision.go:83] configureAuth start
	I0229 18:58:08.906700   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.906992   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.909897   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.910299   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910420   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.912899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913333   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.913363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913526   48088 provision.go:138] copyHostCerts
	I0229 18:58:08.913589   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:08.913602   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:08.913671   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:08.913796   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:08.913808   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:08.913838   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:08.913920   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:08.913940   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:08.913969   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:08.914052   48088 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153528 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube default-k8s-diff-port-153528]
	I0229 18:58:09.033009   48088 provision.go:172] copyRemoteCerts
	I0229 18:58:09.033064   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:09.033087   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.035647   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036023   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.036061   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036262   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.036434   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.036582   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.036685   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.127168   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:09.162113   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 18:58:09.191657   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:09.224555   48088 provision.go:86] duration metric: configureAuth took 317.8564ms
	I0229 18:58:09.224589   48088 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:09.224789   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:58:09.224877   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.227193   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227549   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.227577   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227731   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.227950   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228111   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.228398   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.228595   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.228617   48088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:09.760261   47515 start.go:369] acquired machines lock for "no-preload-247197" in 59.368392801s
	I0229 18:58:09.760316   47515 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:58:09.760326   47515 fix.go:54] fixHost starting: 
	I0229 18:58:09.760731   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:58:09.760768   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:58:09.777304   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0229 18:58:09.777781   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:58:09.778277   47515 main.go:141] libmachine: Using API Version  1
	I0229 18:58:09.778301   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:58:09.778655   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:58:09.778829   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:09.779012   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 18:58:09.780644   47515 fix.go:102] recreateIfNeeded on no-preload-247197: state=Stopped err=<nil>
	I0229 18:58:09.780670   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	W0229 18:58:09.780844   47515 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:58:09.782653   47515 out.go:177] * Restarting existing kvm2 VM for "no-preload-247197" ...
	I0229 18:58:05.149421   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.149514   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.164236   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.648767   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.648856   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.664890   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.148979   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.149069   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.165186   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.649135   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.649245   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.665357   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.148896   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.148978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.163358   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.649238   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.649309   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.665329   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.665359   47919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:07.665368   47919 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:07.665378   47919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:07.665433   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:07.713980   47919 cri.go:89] found id: ""
	I0229 18:58:07.714045   47919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:07.740723   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:07.753838   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:07.753914   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767175   47919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767197   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:07.902881   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.741237   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.970287   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.099101   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.214816   47919 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:09.214897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:09.715311   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:06.583750   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.083063   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.517694   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:09.517720   48088 machine.go:91] provisioned docker machine in 861.950931ms
	I0229 18:58:09.517732   48088 start.go:300] post-start starting for "default-k8s-diff-port-153528" (driver="kvm2")
	I0229 18:58:09.517742   48088 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:09.517755   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.518097   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:09.518134   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.520915   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.521285   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.521590   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.521761   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.521911   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.606485   48088 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:09.611376   48088 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:09.611404   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:09.611468   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:09.611564   48088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:09.611679   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:09.621573   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:09.648803   48088 start.go:303] post-start completed in 131.058856ms
	I0229 18:58:09.648825   48088 fix.go:56] fixHost completed within 20.839852585s
	I0229 18:58:09.648848   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.651416   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651743   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.651771   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651917   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.652114   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652392   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.652563   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.652715   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.652728   48088 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:09.760132   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233089.743154671
	
	I0229 18:58:09.760154   48088 fix.go:206] guest clock: 1709233089.743154671
	I0229 18:58:09.760160   48088 fix.go:219] Guest: 2024-02-29 18:58:09.743154671 +0000 UTC Remote: 2024-02-29 18:58:09.648829212 +0000 UTC m=+270.421886207 (delta=94.325459ms)
	I0229 18:58:09.760177   48088 fix.go:190] guest clock delta is within tolerance: 94.325459ms
	I0229 18:58:09.760183   48088 start.go:83] releasing machines lock for "default-k8s-diff-port-153528", held for 20.951247697s
	I0229 18:58:09.760211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.760473   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:09.763342   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763701   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.763746   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763896   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764519   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764720   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764801   48088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:09.764849   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.764951   48088 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:09.764981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.767670   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.767861   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768035   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768054   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768204   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768322   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768347   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768504   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.768518   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768673   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.768694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768890   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.769024   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.849055   48088 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:09.872309   48088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:10.015348   48088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:10.023333   48088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:10.023405   48088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:10.042264   48088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:10.042288   48088 start.go:475] detecting cgroup driver to use...
	I0229 18:58:10.042361   48088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:10.062390   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:10.080651   48088 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:10.080714   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:10.098478   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:10.115610   48088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:10.250212   48088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:10.402800   48088 docker.go:233] disabling docker service ...
	I0229 18:58:10.402862   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:10.419793   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:10.435149   48088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:10.589671   48088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:10.714460   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:10.730820   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:10.753910   48088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:10.753977   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.766151   48088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:10.766232   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.778824   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.792936   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.810158   48088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:10.828150   48088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:10.843416   48088 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:10.843488   48088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:10.866488   48088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:10.880628   48088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:11.031221   48088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:11.199068   48088 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:11.199143   48088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:11.204851   48088 start.go:543] Will wait 60s for crictl version
	I0229 18:58:11.204922   48088 ssh_runner.go:195] Run: which crictl
	I0229 18:58:11.209384   48088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:11.256928   48088 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:11.257014   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.293338   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.329107   48088 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 18:58:09.783970   47515 main.go:141] libmachine: (no-preload-247197) Calling .Start
	I0229 18:58:09.784127   47515 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:58:09.784926   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:58:09.785291   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:58:09.785654   47515 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:58:09.786319   47515 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:58:11.102135   47515 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:58:11.102911   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.103333   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.103414   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.103321   49001 retry.go:31] will retry after 205.990392ms: waiting for machine to come up
	I0229 18:58:11.310742   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.311298   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.311327   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.311247   49001 retry.go:31] will retry after 353.756736ms: waiting for machine to come up
	I0229 18:58:11.666882   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.667361   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.667392   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.667319   49001 retry.go:31] will retry after 308.284801ms: waiting for machine to come up
	I0229 18:58:11.976805   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.977355   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.977385   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.977309   49001 retry.go:31] will retry after 481.108836ms: waiting for machine to come up
	I0229 18:58:12.459764   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:12.460292   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:12.460330   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:12.460253   49001 retry.go:31] will retry after 549.22451ms: waiting for machine to come up
	I0229 18:58:11.330594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:11.333628   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334080   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:11.334112   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334361   48088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:11.339127   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:11.353078   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:58:11.353129   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:11.392503   48088 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:58:11.392573   48088 ssh_runner.go:195] Run: which lz4
	I0229 18:58:11.398589   48088 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:58:11.405052   48088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:58:11.405091   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:58:13.428402   48088 crio.go:444] Took 2.029836 seconds to copy over tarball
	I0229 18:58:13.428481   48088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:58:10.215640   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.715115   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.215866   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.715307   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.215171   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.715206   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.215153   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.715048   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.215148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.715628   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.084645   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.087354   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.011239   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.011724   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.011751   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.011676   49001 retry.go:31] will retry after 662.346902ms: waiting for machine to come up
	I0229 18:58:13.675622   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.676179   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.676208   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.676115   49001 retry.go:31] will retry after 761.484123ms: waiting for machine to come up
	I0229 18:58:14.439091   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:14.439599   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:14.439626   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:14.439546   49001 retry.go:31] will retry after 980.352556ms: waiting for machine to come up
	I0229 18:58:15.421962   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:15.422377   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:15.422405   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:15.422314   49001 retry.go:31] will retry after 1.134741057s: waiting for machine to come up
	I0229 18:58:16.558585   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:16.559071   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:16.559097   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:16.559005   49001 retry.go:31] will retry after 2.299052603s: waiting for machine to come up
	I0229 18:58:16.327243   48088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898733984s)
	I0229 18:58:16.327277   48088 crio.go:451] Took 2.898846 seconds to extract the tarball
	I0229 18:58:16.327289   48088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:58:16.374029   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:16.425625   48088 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:58:16.425654   48088 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:58:16.425740   48088 ssh_runner.go:195] Run: crio config
	I0229 18:58:16.477353   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:16.477382   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:16.477406   48088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:16.477447   48088 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153528 NodeName:default-k8s-diff-port-153528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:16.477595   48088 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-153528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:16.477659   48088 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-153528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 18:58:16.477718   48088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:58:16.489240   48088 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:16.489301   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:16.500764   48088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:58:16.522927   48088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:58:16.543902   48088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 18:58:16.565262   48088 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:16.571163   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:16.585476   48088 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528 for IP: 192.168.39.210
	I0229 18:58:16.585507   48088 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:16.585657   48088 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:16.585704   48088 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:16.585772   48088 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.key
	I0229 18:58:16.647093   48088 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key.6213553a
	I0229 18:58:16.647194   48088 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key
	I0229 18:58:16.647398   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:16.647463   48088 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:16.647476   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:16.647501   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:16.647527   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:16.647553   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:16.647591   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:16.648235   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:16.678452   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:58:16.708360   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:16.740905   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:16.768820   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:16.799459   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:16.829488   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:16.860881   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:16.893064   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:16.923404   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:16.952531   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:16.980895   48088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:17.001306   48088 ssh_runner.go:195] Run: openssl version
	I0229 18:58:17.007995   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:17.024000   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030471   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030544   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.038306   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:17.050985   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:17.063089   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068437   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068485   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.075156   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:17.087015   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:17.099964   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105272   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105333   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.112447   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
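The sequence above installs each CA certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL locates trusted CAs. A minimal standalone sketch of the same idea (the path example.pem is hypothetical, not taken from this run):

    # compute the OpenSSL subject hash and expose the CA as <hash>.0 (example.pem is a hypothetical path)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"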
	I0229 18:58:17.126499   48088 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:17.133216   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:17.140320   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:17.147900   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:17.154931   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:17.163552   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:17.172256   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
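Each control-plane certificate is then checked with "openssl x509 -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours (86400 seconds). A small sketch of how that exit status could be consumed, using a hypothetical path:

    # exit status 0 = valid for at least one more day, non-zero = expiring or expired
    if ! openssl x509 -noout -in /var/lib/minikube/certs/example.crt -checkend 86400; then
        echo "certificate expires within 24h; regenerate it" >&2
    fi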
	I0229 18:58:17.181349   48088 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:17.181481   48088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:17.181554   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:17.227444   48088 cri.go:89] found id: ""
	I0229 18:58:17.227532   48088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:17.242533   48088 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:17.242562   48088 kubeadm.go:636] restartCluster start
	I0229 18:58:17.242616   48088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:17.254713   48088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.256305   48088 kubeconfig.go:92] found "default-k8s-diff-port-153528" server: "https://192.168.39.210:8444"
	I0229 18:58:17.259432   48088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:17.281454   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.281525   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.295342   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.781719   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.781807   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.797462   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.281981   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.282082   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.300449   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.781952   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.782024   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.796641   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:15.215935   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.714969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.215921   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.715200   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.215151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.715520   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.215291   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.215157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.715037   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.585143   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.086077   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.861140   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:18.861635   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:18.861658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:18.861584   49001 retry.go:31] will retry after 2.115098542s: waiting for machine to come up
	I0229 18:58:20.978165   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:20.978625   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:20.978658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:20.978570   49001 retry.go:31] will retry after 3.520116791s: waiting for machine to come up
	I0229 18:58:19.282008   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.282093   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.297806   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:19.782384   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.782465   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.802496   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.281712   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.281777   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.298545   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.782139   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.782249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.799615   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.282180   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.282288   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.297649   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.782263   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.782341   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.797537   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.282131   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.282211   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.303084   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.781558   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.781645   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.797155   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.281645   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.281727   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.296059   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.781663   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.797132   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.215501   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.214953   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.715762   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.215608   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.715556   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.215633   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.715012   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.215182   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.585475   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:22.586962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:25.082804   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:24.503134   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:24.503537   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:24.503561   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:24.503495   49001 retry.go:31] will retry after 3.056941725s: waiting for machine to come up
	I0229 18:58:27.562228   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:27.562698   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:27.562729   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:27.562650   49001 retry.go:31] will retry after 5.535128197s: waiting for machine to come up
	I0229 18:58:24.282207   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.282273   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.298683   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:24.781997   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.782088   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.796544   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.282137   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.282249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.297916   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.782489   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.782605   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.800171   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.281679   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.281764   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.296395   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.781700   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.796380   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.282230   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:27.282319   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:27.300719   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.300745   48088 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
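The restart path above polls for a running kube-apiserver with pgrep roughly every half second and gives up once its context deadline expires, at which point it falls back to a full reconfigure. A rough shell equivalent of that wait loop (the 10-second deadline is an assumption for illustration, not the value minikube uses):

    # poll for the apiserver process until a deadline, then report failure
    deadline=$((SECONDS + 10))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "apiserver did not appear before the deadline" >&2
            break
        fi
        sleep 0.5
    done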
	I0229 18:58:27.300753   48088 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:27.300762   48088 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:27.300822   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:27.344465   48088 cri.go:89] found id: ""
	I0229 18:58:27.344525   48088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:27.367244   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:27.379831   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:27.379895   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390372   48088 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390393   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:27.521441   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.070547   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.324425   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.416807   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.485785   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:28.485880   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.986473   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.215272   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.715667   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.215566   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.715860   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.214993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.715679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.215093   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.715081   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.215188   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.715981   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.585150   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.585716   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.486136   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.512004   48088 api_server.go:72] duration metric: took 1.026225672s to wait for apiserver process to appear ...
	I0229 18:58:29.512036   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:58:29.512081   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:29.512602   48088 api_server.go:269] stopped: https://192.168.39.210:8444/healthz: Get "https://192.168.39.210:8444/healthz": dial tcp 192.168.39.210:8444: connect: connection refused
	I0229 18:58:30.012197   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.076090   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:58:33.076122   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:58:33.076141   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.115044   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.115082   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:33.512305   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.518615   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.518640   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.012514   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.024771   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:34.024809   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.512427   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.519703   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 18:58:34.527814   48088 api_server.go:141] control plane version: v1.28.4
	I0229 18:58:34.527850   48088 api_server.go:131] duration metric: took 5.015799681s to wait for apiserver health ...
	I0229 18:58:34.527862   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:34.527869   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:34.529573   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
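Once the apiserver process exists, readiness is judged by polling https://192.168.39.210:8444/healthz: anonymous requests first return 403, then 500 while individual poststarthook checks are still failing, and finally 200 with the body "ok". A curl-based sketch of the same probe (the -k flag skips TLS verification purely for illustration; minikube's own client trusts the cluster CA instead):

    # keep probing /healthz until the apiserver answers 200
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.210:8444/healthz)" = "200" ]; do
        sleep 0.5
    done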
	I0229 18:58:30.215544   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.715080   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.215386   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.215078   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.715087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.215842   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.714950   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.215778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.715201   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.084243   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:34.087247   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:33.099983   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100527   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has current primary IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100548   47515 main.go:141] libmachine: (no-preload-247197) Found IP for machine: 192.168.50.72
	I0229 18:58:33.100584   47515 main.go:141] libmachine: (no-preload-247197) Reserving static IP address...
	I0229 18:58:33.100959   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.100985   47515 main.go:141] libmachine: (no-preload-247197) DBG | skip adding static IP to network mk-no-preload-247197 - found existing host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"}
	I0229 18:58:33.100999   47515 main.go:141] libmachine: (no-preload-247197) Reserved static IP address: 192.168.50.72
	I0229 18:58:33.101016   47515 main.go:141] libmachine: (no-preload-247197) Waiting for SSH to be available...
	I0229 18:58:33.101057   47515 main.go:141] libmachine: (no-preload-247197) DBG | Getting to WaitForSSH function...
	I0229 18:58:33.103439   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.103766   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.103817   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.104041   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH client type: external
	I0229 18:58:33.104069   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa (-rw-------)
	I0229 18:58:33.104110   47515 main.go:141] libmachine: (no-preload-247197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:33.104131   47515 main.go:141] libmachine: (no-preload-247197) DBG | About to run SSH command:
	I0229 18:58:33.104145   47515 main.go:141] libmachine: (no-preload-247197) DBG | exit 0
	I0229 18:58:33.240401   47515 main.go:141] libmachine: (no-preload-247197) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:33.240811   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:58:33.241500   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.244578   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.244970   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.245002   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.245358   47515 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:58:33.245522   47515 machine.go:88] provisioning docker machine ...
	I0229 18:58:33.245542   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:33.245755   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.245935   47515 buildroot.go:166] provisioning hostname "no-preload-247197"
	I0229 18:58:33.245977   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.246175   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.248841   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249263   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.249284   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249458   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.249629   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249767   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249946   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.250120   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.250335   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.250351   47515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247197 && echo "no-preload-247197" | sudo tee /etc/hostname
	I0229 18:58:33.386175   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247197
	
	I0229 18:58:33.386210   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.389491   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.389909   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.389950   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.390080   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.390288   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390495   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390678   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.390844   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.391058   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.391090   47515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247197/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:33.510209   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:33.510243   47515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:33.510263   47515 buildroot.go:174] setting up certificates
	I0229 18:58:33.510273   47515 provision.go:83] configureAuth start
	I0229 18:58:33.510281   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.510582   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.513357   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513741   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.513769   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.516227   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516513   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.516543   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516700   47515 provision.go:138] copyHostCerts
	I0229 18:58:33.516746   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:33.516761   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:33.516824   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:33.516931   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:33.516952   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:33.516987   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:33.517066   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:33.517077   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:33.517106   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:33.517181   47515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.no-preload-247197 san=[192.168.50.72 192.168.50.72 localhost 127.0.0.1 minikube no-preload-247197]
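The provisioner then issues a Docker server certificate signed by the machine CA with the SANs listed above (the machine IP, localhost addresses, and hostnames). An equivalent standalone openssl sketch, not minikube's actual code path, with hypothetical file names:

    # issue a CA-signed server cert whose SANs match the machine's IPs and names (file names are hypothetical)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins/CN=no-preload-247197"
    printf 'subjectAltName = IP:192.168.50.72, DNS:localhost, DNS:minikube, DNS:no-preload-247197\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -extfile san.cnf -out server.pem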
	I0229 18:58:33.651858   47515 provision.go:172] copyRemoteCerts
	I0229 18:58:33.651914   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:33.651936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.655072   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655551   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.655584   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655776   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.655952   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.656103   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.656277   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:33.747197   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:58:33.776690   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:33.804404   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:33.831068   47515 provision.go:86] duration metric: configureAuth took 320.782451ms
	I0229 18:58:33.831105   47515 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:33.831336   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:33.831469   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.834209   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834617   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.834650   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834845   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.835046   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835215   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835343   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.835503   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.835694   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.835717   47515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:34.141350   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:34.141372   47515 machine.go:91] provisioned docker machine in 895.837431ms
	I0229 18:58:34.141385   47515 start.go:300] post-start starting for "no-preload-247197" (driver="kvm2")
	I0229 18:58:34.141399   47515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:34.141422   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.141763   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:34.141800   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.144673   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145078   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.145106   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145225   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.145387   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.145509   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.145618   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.241817   47515 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:34.247096   47515 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:34.247125   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:34.247200   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:34.247294   47515 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:34.247386   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:34.261959   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:34.293974   47515 start.go:303] post-start completed in 152.574202ms
	I0229 18:58:34.294000   47515 fix.go:56] fixHost completed within 24.533673806s
	I0229 18:58:34.294031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.297066   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297455   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.297480   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297685   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.297865   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298064   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298256   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.298448   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:34.298671   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:34.298687   47515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:34.416701   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233114.391433365
	
	I0229 18:58:34.416724   47515 fix.go:206] guest clock: 1709233114.391433365
	I0229 18:58:34.416733   47515 fix.go:219] Guest: 2024-02-29 18:58:34.391433365 +0000 UTC Remote: 2024-02-29 18:58:34.294005249 +0000 UTC m=+366.458860154 (delta=97.428116ms)
	I0229 18:58:34.416763   47515 fix.go:190] guest clock delta is within tolerance: 97.428116ms
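The guest-clock lines above come from running `date +%s.%N` on the VM and comparing the result with the host clock; the clock is only adjusted when the delta exceeds a tolerance. A small Go sketch of that parse-and-compare follows; the 2s tolerance is an illustrative value, not necessarily the one minikube uses.

// Sketch of the guest-clock check: parse "seconds.nanoseconds" as printed by
// `date +%s.%N` on the guest and compare it against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709233114.391433365") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > 2*time.Second { // illustrative tolerance
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync clock\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}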
	I0229 18:58:34.416770   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 24.656479144s
	I0229 18:58:34.416795   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.417031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:34.419713   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420098   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.420129   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420288   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420789   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420989   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.421076   47515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:34.421125   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.421239   47515 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:34.421268   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.424047   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424359   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424399   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424418   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424564   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.424731   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.424803   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424829   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424969   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425124   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.425217   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.425348   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.425506   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425705   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.505253   47515 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:34.533780   47515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:34.696609   47515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:34.703768   47515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:34.703848   47515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:34.723243   47515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:34.723271   47515 start.go:475] detecting cgroup driver to use...
	I0229 18:58:34.723342   47515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:34.743696   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:34.760022   47515 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:34.760085   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:34.775217   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:34.791576   47515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:34.920544   47515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:35.093684   47515 docker.go:233] disabling docker service ...
	I0229 18:58:35.093760   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:35.112349   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:35.128145   47515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:35.246120   47515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:35.363110   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:35.378087   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:35.399610   47515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:35.399658   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.410579   47515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:35.410624   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.421664   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.432726   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.443728   47515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:35.455072   47515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:35.467211   47515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:35.467263   47515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:35.480669   47515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:35.491649   47515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:35.621272   47515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:35.793148   47515 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:35.793225   47515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:35.798495   47515 start.go:543] Will wait 60s for crictl version
	I0229 18:58:35.798556   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:35.803756   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:35.848168   47515 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
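After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and then for crictl to respond. A minimal sketch of waiting for a unix socket with a deadline is below; locally it checks the path directly, whereas minikube performs the equivalent check on the guest via `stat` over SSH.

// Sketch of the "Will wait 60s for socket path" step: poll until the socket exists.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}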
	I0229 18:58:35.848259   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.879346   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.911939   47515 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 18:58:35.913174   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:35.915761   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916134   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:35.916162   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916350   47515 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:35.921206   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:35.936342   47515 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:58:35.936375   47515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:35.974456   47515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 18:58:35.974475   47515 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:58:35.974509   47515 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.974546   47515 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.974567   47515 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:35.974613   47515 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.974668   47515 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.974733   47515 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.974780   47515 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975073   47515 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975958   47515 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.975981   47515 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975993   47515 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.976002   47515 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.976027   47515 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975963   47515 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.975959   47515 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.976249   47515 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.111205   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 18:58:36.124071   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.150002   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.196158   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.258361   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.273898   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.283390   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.336487   47515 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 18:58:36.336531   47515 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.336541   47515 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 18:58:36.336577   47515 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.336590   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336620   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336636   47515 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 18:58:36.336661   47515 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.336670   47515 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 18:58:36.336695   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336697   47515 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.336723   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.383302   47515 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 18:58:36.383347   47515 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.383402   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393420   47515 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 18:58:36.393444   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.393459   47515 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.393495   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393527   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.393579   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.393612   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.393665   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.503611   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.503707   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.508306   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.508403   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.511536   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:58:36.511600   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511636   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:36.511706   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.511721   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.511749   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511781   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.522392   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 18:58:36.522413   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522458   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522645   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 18:58:36.523319   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 18:58:36.529871   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576922   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576994   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:58:36.577093   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:36.892014   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
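The "copy: skipping ... (exists)" lines above show the image-cache fast path: a tarball is only transferred when it is not already on the guest, and each image is then loaded with `sudo podman load -i`. A simplified local sketch of that check-then-load is below; the tarball path is illustrative and, unlike minikube, this shells out on the local machine rather than over SSH.

// Sketch of the cached-image load: verify the tarball exists, then `podman load` it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	// Equivalent of the stat check logged before each copy/skip decision.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not available: %w", tarball, err)
	}
	// Equivalent of "Loading image: ..." followed by `sudo podman load -i <tarball>`.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("image loaded from cache")
}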
	I0229 18:58:34.530886   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:58:34.547233   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:58:34.572237   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:58:34.586775   48088 system_pods.go:59] 8 kube-system pods found
	I0229 18:58:34.586816   48088 system_pods.go:61] "coredns-5dd5756b68-tr4nn" [016aff45-17c3-4278-a7f3-1e0a5770f1d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:58:34.586827   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [829f38ad-e4e4-434d-8da6-dde64deeb1ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:58:34.586837   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [e27986e6-58a2-4acc-8a41-d4662ce0848d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:58:34.586853   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [fb77dff9-141e-495f-9be8-f570f9387bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:58:34.586868   48088 system_pods.go:61] "kube-proxy-fwqsv" [af8cd0ff-71dd-44d4-8918-496e27654cbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:58:34.586887   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [a325ec8e-4613-4447-87b1-c23b5b614352] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:58:34.586898   48088 system_pods.go:61] "metrics-server-57f55c9bc5-226bj" [80d7a4c6-e9b5-4324-8c61-489a874a9e79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:58:34.586910   48088 system_pods.go:61] "storage-provisioner" [4270d9ce-329f-4531-9563-65a398f8b168] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:58:34.586919   48088 system_pods.go:74] duration metric: took 14.657543ms to wait for pod list to return data ...
	I0229 18:58:34.586932   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:58:34.595109   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:58:34.595144   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 18:58:34.595158   48088 node_conditions.go:105] duration metric: took 8.219984ms to run NodePressure ...
	I0229 18:58:34.595179   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:34.946493   48088 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951066   48088 kubeadm.go:787] kubelet initialised
	I0229 18:58:34.951088   48088 kubeadm.go:788] duration metric: took 4.569338ms waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951098   48088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:58:34.956637   48088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:36.967075   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
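The pod_ready.go lines above poll each system-critical pod until its Ready condition is True or the 4m timeout expires. A client-go sketch of that style of wait is below; the namespace and pod name are copied from the log, while the 2s poll interval and use of the local kubeconfig are assumptions for illustration.

// Sketch of a pod_ready-style wait: poll a pod until Ready or timeout.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-tr4nn", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for pod: %v", ctx.Err())
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}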
	I0229 18:58:35.215815   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.715203   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.215521   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.715525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.215610   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.715474   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.215208   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.714993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.215128   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.584041   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.584897   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.722817   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.20033311s)
	I0229 18:58:38.722904   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 18:58:38.722923   47515 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.830873001s)
	I0229 18:58:38.722981   47515 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 18:58:38.723016   47515 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:38.722938   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.723083   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:38.723104   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.722872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.145756086s)
	I0229 18:58:38.723163   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 18:58:38.728297   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:42.131683   47515 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.403360461s)
	I0229 18:58:42.131729   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 18:58:42.131819   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.408694108s)
	I0229 18:58:42.131839   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 18:58:42.131823   47515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:42.131862   47515 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:42.131903   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:39.465588   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:41.473698   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:42.965252   48088 pod_ready.go:92] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.965281   48088 pod_ready.go:81] duration metric: took 8.008622438s waiting for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.965293   48088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977865   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.977888   48088 pod_ready.go:81] duration metric: took 12.586144ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977900   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486518   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:43.486545   48088 pod_ready.go:81] duration metric: took 508.631346ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486554   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:40.215679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.715898   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.215271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.715702   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.214943   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.715085   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.215196   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.715164   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.215580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.715155   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.082209   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.089104   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:45.101973   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.991872   47515 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.859995098s)
	I0229 18:58:43.991921   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 18:58:43.992104   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.860178579s)
	I0229 18:58:43.992159   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 18:58:43.992190   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:43.992238   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:45.454368   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.462102352s)
	I0229 18:58:45.454407   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 18:58:45.454436   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.454567   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.493014   48088 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.493937   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.493969   48088 pod_ready.go:81] duration metric: took 3.007406763s waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.493982   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499157   48088 pod_ready.go:92] pod "kube-proxy-fwqsv" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.499177   48088 pod_ready.go:81] duration metric: took 5.187224ms waiting for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499188   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006573   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:48.006600   48088 pod_ready.go:81] duration metric: took 1.507402889s waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006612   48088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:45.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.715879   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.215457   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.715056   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.215140   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.715448   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.715058   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
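The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a fixed 500ms probe loop: keep asking for a kube-apiserver PID until one appears or the caller gives up. A sketch of that retry loop follows; it runs pgrep locally for simplicity, whereas minikube issues the same command over SSH, and the 2m deadline here is only an example.

// Sketch of the 500ms apiserver probe: retry pgrep until it succeeds or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("no kube-apiserver process after %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute) // example deadline
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}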
	I0229 18:58:47.586794   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.084118   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:48.118942   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.664337971s)
	I0229 18:58:48.118983   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 18:58:48.119010   47515 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:48.119086   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:52.117429   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.998319742s)
	I0229 18:58:52.117462   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 18:58:52.117488   47515 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:52.117538   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:50.015404   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:52.515203   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.214969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.715535   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.715704   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.715897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.215106   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.715753   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.215737   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.715449   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.084871   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:54.582435   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.079184   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 18:58:53.079224   47515 cache_images.go:123] Successfully loaded all cached images
	I0229 18:58:53.079231   47515 cache_images.go:92] LoadImages completed in 17.104746432s
	I0229 18:58:53.079303   47515 ssh_runner.go:195] Run: crio config
	I0229 18:58:53.126378   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:58:53.126400   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:53.126417   47515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:53.126434   47515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247197 NodeName:no-preload-247197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:53.126583   47515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247197"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:53.126643   47515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:58:53.126692   47515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:58:53.141044   47515 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:53.141117   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:53.153167   47515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 18:58:53.173724   47515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:58:53.192645   47515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0229 18:58:53.212004   47515 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:53.216443   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:53.233319   47515 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197 for IP: 192.168.50.72
	I0229 18:58:53.233353   47515 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:53.233510   47515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:53.233568   47515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:53.233680   47515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.key
	I0229 18:58:53.233763   47515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key.7c8fc674
	I0229 18:58:53.233803   47515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key
	I0229 18:58:53.233915   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:53.233942   47515 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:53.233948   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:53.233971   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:53.233991   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:53.234011   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:53.234050   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:53.234710   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:53.264093   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:58:53.290793   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:53.319206   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:53.346074   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:53.373754   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:53.402222   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:53.430685   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:53.458589   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:53.485553   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:53.513594   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:53.542588   47515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:53.562935   47515 ssh_runner.go:195] Run: openssl version
	I0229 18:58:53.571313   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:53.586708   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592585   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592682   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.600135   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:53.614410   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:53.627733   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632869   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632926   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.639973   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:53.654090   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:53.667714   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.672987   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.673046   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.679806   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
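The openssl/ln sequence above installs each CA certificate by computing its OpenSSL subject hash and linking it as /etc/ssl/certs/<hash>.0. A rough Go equivalent of that pair of steps is sketched below; the input path is illustrative and the program must run with enough privileges to write under /etc/ssl/certs.

// Sketch of the CA install steps: `openssl x509 -hash -noout -in <pem>` plus `ln -fs`.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash failed: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Mirror `ln -fs`: replace any existing link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA certificate linked into /etc/ssl/certs")
}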
	I0229 18:58:53.692846   47515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:53.697764   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:53.704678   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:53.711070   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:53.717607   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:53.724048   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:53.731138   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:58:53.737875   47515 kubeadm.go:404] StartCluster: {Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:53.737981   47515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:53.738028   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:53.777952   47515 cri.go:89] found id: ""
	I0229 18:58:53.778016   47515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:53.790323   47515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:53.790342   47515 kubeadm.go:636] restartCluster start
	I0229 18:58:53.790397   47515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:53.801812   47515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:53.803203   47515 kubeconfig.go:92] found "no-preload-247197" server: "https://192.168.50.72:8443"
	I0229 18:58:53.806252   47515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:53.817542   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:53.817601   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:53.831702   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.318196   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.318261   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.332586   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.818521   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.818617   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.835279   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.317681   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.317760   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.334156   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.818654   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.818761   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.834435   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.317800   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.317923   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.333149   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.817667   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.817776   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.832497   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.318058   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.318173   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.332672   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.818372   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.818477   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.834669   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.015453   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:57.513580   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:55.215634   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.715221   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.215582   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.715580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.215652   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.715281   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.215656   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.715759   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.714984   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.583205   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:59.083761   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:58.318525   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.318595   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.334704   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:58.818249   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.818360   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.834221   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.318385   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.318489   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.334283   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.818167   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.818231   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.834310   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.317793   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.317904   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.334063   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.817623   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.817702   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.832855   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.318481   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.318569   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.333716   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.818312   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.818413   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.834094   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.317571   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.317680   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.332422   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.817947   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.818044   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.834339   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.514446   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:02.015881   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:00.215747   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.214978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.715726   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.215092   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.715148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.215149   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.715717   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.215830   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.715275   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.084277   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.583278   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.318317   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.318410   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.334824   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.818569   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.818652   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.834206   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.834235   47515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:59:03.834244   47515 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:59:03.834255   47515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:59:03.834306   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:59:03.877464   47515 cri.go:89] found id: ""
	I0229 18:59:03.877543   47515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:59:03.901093   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:59:03.912185   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:59:03.912237   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923685   47515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923706   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:04.037753   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.127681   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089896164s)
	I0229 18:59:05.127710   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.363326   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.447053   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.525183   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:05.525276   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.026071   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.525747   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.026103   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.043681   47515 api_server.go:72] duration metric: took 1.518498943s to wait for apiserver process to appear ...
	I0229 18:59:07.043706   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:07.043728   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:04.518296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.014672   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:05.215563   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.215014   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.715750   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.215911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.215895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.715565   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.214999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:09.215096   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:09.270645   47919 cri.go:89] found id: ""
	I0229 18:59:09.270672   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.270683   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:09.270690   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:09.270748   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:09.335492   47919 cri.go:89] found id: ""
	I0229 18:59:09.335519   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.335530   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:09.335546   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:09.335627   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:09.405117   47919 cri.go:89] found id: ""
	I0229 18:59:09.405150   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.405160   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:09.405167   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:09.405233   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:09.451096   47919 cri.go:89] found id: ""
	I0229 18:59:09.451128   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.451140   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:09.451147   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:09.451226   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:09.498951   47919 cri.go:89] found id: ""
	I0229 18:59:09.498981   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.499007   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:09.499014   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:09.499091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:09.544447   47919 cri.go:89] found id: ""
	I0229 18:59:09.544474   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.544484   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:09.544491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:09.544548   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:09.597735   47919 cri.go:89] found id: ""
	I0229 18:59:09.597764   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.597775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:09.597782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:09.597836   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:09.648458   47919 cri.go:89] found id: ""
	I0229 18:59:09.648480   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.648489   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:09.648499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:09.648515   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:09.700744   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:09.700792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:09.717303   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:09.717332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:09.845966   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:09.845984   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:09.845995   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:09.913069   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:09.913106   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:05.583650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.584155   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.584605   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.527960   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.528037   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.528059   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.571679   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.571713   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.571738   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.647733   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:09.647780   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.044646   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.049310   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.049347   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.543904   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.551014   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.551055   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:11.044658   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:11.051170   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 18:59:11.059048   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:11.059076   47515 api_server.go:131] duration metric: took 4.015363545s to wait for apiserver health ...
	I0229 18:59:11.059085   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:59:11.059092   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:59:11.060915   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:59:11.062158   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:59:11.076961   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:59:11.109344   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:11.129625   47515 system_pods.go:59] 8 kube-system pods found
	I0229 18:59:11.129659   47515 system_pods.go:61] "coredns-76f75df574-dfrds" [ab7ce7e3-0532-48a1-9177-00e554d7e5af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:59:11.129668   47515 system_pods.go:61] "etcd-no-preload-247197" [e37e6d4c-5039-484e-98af-553ade3ba60f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:11.129679   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [933648a9-115f-4c2a-b699-48ef7409331c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:11.129692   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [b87a4a06-8a47-4cdf-a5e7-85f967e6332a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:11.129699   47515 system_pods.go:61] "kube-proxy-hjm9j" [a2e6ec66-78d9-4637-bb47-3f954969813b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:59:11.129707   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [cc52dc2c-cbe0-4cf0-8a2d-99a6f1880f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:11.129717   47515 system_pods.go:61] "metrics-server-57f55c9bc5-ggf8f" [dd2986a2-20a9-499c-805a-3c28819ff2f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:59:11.129726   47515 system_pods.go:61] "storage-provisioner" [22f64d5e-b947-43ed-9842-cb6e252fd4a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:59:11.129733   47515 system_pods.go:74] duration metric: took 20.366108ms to wait for pod list to return data ...
	I0229 18:59:11.129742   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:11.133259   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:59:11.133282   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 18:59:11.133294   47515 node_conditions.go:105] duration metric: took 3.545943ms to run NodePressure ...
	I0229 18:59:11.133313   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:11.618536   47515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625628   47515 kubeadm.go:787] kubelet initialised
	I0229 18:59:11.625649   47515 kubeadm.go:788] duration metric: took 7.089584ms waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625661   47515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:59:11.641122   47515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:09.515059   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:11.515286   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.013214   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:12.465591   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:12.479774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:12.479825   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:12.517591   47919 cri.go:89] found id: ""
	I0229 18:59:12.517620   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.517630   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:12.517637   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:12.517693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:12.560735   47919 cri.go:89] found id: ""
	I0229 18:59:12.560758   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.560769   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:12.560776   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:12.560843   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:12.600002   47919 cri.go:89] found id: ""
	I0229 18:59:12.600025   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.600033   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:12.600043   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:12.600088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:12.639223   47919 cri.go:89] found id: ""
	I0229 18:59:12.639252   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.639264   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:12.639272   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:12.639339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:12.682491   47919 cri.go:89] found id: ""
	I0229 18:59:12.682514   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.682524   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:12.682531   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:12.682590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:12.720669   47919 cri.go:89] found id: ""
	I0229 18:59:12.720693   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.720700   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:12.720706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:12.720773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:12.764880   47919 cri.go:89] found id: ""
	I0229 18:59:12.764908   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.764919   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:12.764926   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:12.765011   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:12.808987   47919 cri.go:89] found id: ""
	I0229 18:59:12.809019   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.809052   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:12.809064   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:12.809079   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:12.866228   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:12.866263   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:12.886698   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:12.886729   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:12.963092   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:12.963116   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:12.963129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:13.034485   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:13.034524   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:11.586793   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.081742   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:13.648688   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.648876   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:17.649478   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:16.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.015919   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.588224   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.603293   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:15.603368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:15.648746   47919 cri.go:89] found id: ""
	I0229 18:59:15.648771   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.648781   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:15.648788   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:15.648850   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:15.686420   47919 cri.go:89] found id: ""
	I0229 18:59:15.686447   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.686463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:15.686470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:15.686533   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:15.729410   47919 cri.go:89] found id: ""
	I0229 18:59:15.729439   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.729451   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:15.729458   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:15.729526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:15.768078   47919 cri.go:89] found id: ""
	I0229 18:59:15.768108   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.768119   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:15.768127   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:15.768188   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:15.806725   47919 cri.go:89] found id: ""
	I0229 18:59:15.806753   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.806765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:15.806772   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:15.806845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:15.848566   47919 cri.go:89] found id: ""
	I0229 18:59:15.848593   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.848600   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:15.848606   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:15.848657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:15.888907   47919 cri.go:89] found id: ""
	I0229 18:59:15.888930   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.888942   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:15.888948   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:15.889009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:15.926653   47919 cri.go:89] found id: ""
	I0229 18:59:15.926686   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.926708   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:15.926729   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:15.926746   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:15.976773   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:15.976812   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:15.995440   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:15.995477   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:16.103753   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:16.103774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:16.103786   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:16.188282   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:16.188319   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:18.733451   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:18.748528   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:18.748607   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:18.785998   47919 cri.go:89] found id: ""
	I0229 18:59:18.786055   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.786069   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:18.786078   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:18.786144   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:18.824234   47919 cri.go:89] found id: ""
	I0229 18:59:18.824260   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.824270   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:18.824277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:18.824339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:18.868586   47919 cri.go:89] found id: ""
	I0229 18:59:18.868615   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.868626   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:18.868633   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:18.868696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:18.912622   47919 cri.go:89] found id: ""
	I0229 18:59:18.912647   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.912655   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:18.912661   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:18.912708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:18.952001   47919 cri.go:89] found id: ""
	I0229 18:59:18.952029   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.952040   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:18.952047   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:18.952108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:18.993085   47919 cri.go:89] found id: ""
	I0229 18:59:18.993130   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.993140   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:18.993148   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:18.993209   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:19.041498   47919 cri.go:89] found id: ""
	I0229 18:59:19.041524   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.041536   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:19.041543   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:19.041601   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:19.099107   47919 cri.go:89] found id: ""
	I0229 18:59:19.099132   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.099143   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:19.099153   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:19.099169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:19.158578   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:19.158615   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:19.173561   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:19.173590   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:19.248498   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:19.248524   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:19.248540   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:19.326904   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:19.326939   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:16.085349   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.582796   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:20.148468   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.648188   47515 pod_ready.go:92] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:21.648214   47515 pod_ready.go:81] duration metric: took 10.0070638s waiting for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:21.648225   47515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:20.514234   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:22.514669   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.877087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:21.892919   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:21.892976   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:21.931119   47919 cri.go:89] found id: ""
	I0229 18:59:21.931147   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.931159   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:21.931167   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:21.931227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:21.971884   47919 cri.go:89] found id: ""
	I0229 18:59:21.971908   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.971916   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:21.971921   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:21.971975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:22.019170   47919 cri.go:89] found id: ""
	I0229 18:59:22.019206   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.019216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:22.019232   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:22.019311   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:22.078057   47919 cri.go:89] found id: ""
	I0229 18:59:22.078083   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.078093   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:22.078100   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:22.078162   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:22.128112   47919 cri.go:89] found id: ""
	I0229 18:59:22.128141   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.128151   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:22.128157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:22.128214   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:22.171354   47919 cri.go:89] found id: ""
	I0229 18:59:22.171382   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.171393   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:22.171400   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:22.171466   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:22.225620   47919 cri.go:89] found id: ""
	I0229 18:59:22.225642   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.225651   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:22.225658   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:22.225718   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:22.271291   47919 cri.go:89] found id: ""
	I0229 18:59:22.271320   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.271332   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:22.271343   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:22.271358   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:22.336735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:22.336765   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:22.354397   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:22.354425   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:22.432691   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:22.432713   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:22.432727   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:22.520239   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:22.520268   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:20.587039   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.084979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.086225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.657675   47515 pod_ready.go:102] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.656013   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.656050   47515 pod_ready.go:81] duration metric: took 4.007810687s waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.656064   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661235   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.661263   47515 pod_ready.go:81] duration metric: took 5.191999ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661273   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666649   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.666672   47515 pod_ready.go:81] duration metric: took 5.388774ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666680   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672042   47515 pod_ready.go:92] pod "kube-proxy-hjm9j" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.672067   47515 pod_ready.go:81] duration metric: took 5.380771ms waiting for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672076   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.676980   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.677001   47515 pod_ready.go:81] duration metric: took 4.919332ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.677013   47515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:27.684865   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.017772   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:27.513975   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.073478   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:25.105197   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:25.105262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:25.165700   47919 cri.go:89] found id: ""
	I0229 18:59:25.165728   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.165737   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:25.165744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:25.165810   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:25.210864   47919 cri.go:89] found id: ""
	I0229 18:59:25.210892   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.210904   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:25.210911   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:25.210974   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:25.257785   47919 cri.go:89] found id: ""
	I0229 18:59:25.257810   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.257820   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:25.257827   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:25.257888   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:25.299816   47919 cri.go:89] found id: ""
	I0229 18:59:25.299844   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.299855   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:25.299863   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:25.299933   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:25.339711   47919 cri.go:89] found id: ""
	I0229 18:59:25.339737   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.339746   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:25.339751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:25.339805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:25.381107   47919 cri.go:89] found id: ""
	I0229 18:59:25.381135   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.381145   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:25.381153   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:25.381211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:25.429029   47919 cri.go:89] found id: ""
	I0229 18:59:25.429054   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.429064   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:25.429071   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:25.429130   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:25.470598   47919 cri.go:89] found id: ""
	I0229 18:59:25.470629   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.470637   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:25.470644   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:25.470655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.516439   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:25.516476   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:25.569170   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:25.569204   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:25.584405   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:25.584430   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:25.663650   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:25.663671   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:25.663686   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.248036   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:28.263367   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:28.263440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:28.302232   47919 cri.go:89] found id: ""
	I0229 18:59:28.302259   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.302273   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:28.302281   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:28.302340   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:28.345147   47919 cri.go:89] found id: ""
	I0229 18:59:28.345173   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.345185   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:28.345192   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:28.345250   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:28.383671   47919 cri.go:89] found id: ""
	I0229 18:59:28.383690   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.383702   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:28.383709   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:28.383762   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:28.423737   47919 cri.go:89] found id: ""
	I0229 18:59:28.423762   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.423769   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:28.423774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:28.423826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:28.465679   47919 cri.go:89] found id: ""
	I0229 18:59:28.465705   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.465715   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:28.465723   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:28.465775   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:28.509703   47919 cri.go:89] found id: ""
	I0229 18:59:28.509731   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.509742   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:28.509754   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:28.509826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:28.549981   47919 cri.go:89] found id: ""
	I0229 18:59:28.550010   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.550021   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:28.550027   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:28.550093   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:28.589802   47919 cri.go:89] found id: ""
	I0229 18:59:28.589827   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.589834   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:28.589841   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:28.589853   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:28.670623   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:28.670644   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:28.670655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.765451   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:28.765484   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:28.821538   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:28.821571   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:28.889401   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:28.889438   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:27.583470   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.584344   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:30.184242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:32.184867   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.514804   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.516473   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.013518   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.406911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:31.422464   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:31.422541   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:31.460701   47919 cri.go:89] found id: ""
	I0229 18:59:31.460744   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.460755   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:31.460762   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:31.460822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:31.506966   47919 cri.go:89] found id: ""
	I0229 18:59:31.506996   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.507007   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:31.507013   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:31.507088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:31.542582   47919 cri.go:89] found id: ""
	I0229 18:59:31.542611   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.542623   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:31.542631   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:31.542693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:31.585470   47919 cri.go:89] found id: ""
	I0229 18:59:31.585496   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.585508   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:31.585516   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:31.585574   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:31.627751   47919 cri.go:89] found id: ""
	I0229 18:59:31.627785   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.627797   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:31.627805   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:31.627864   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:31.665988   47919 cri.go:89] found id: ""
	I0229 18:59:31.666009   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.666017   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:31.666023   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:31.666081   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:31.712553   47919 cri.go:89] found id: ""
	I0229 18:59:31.712583   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.712597   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:31.712603   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:31.712659   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.749904   47919 cri.go:89] found id: ""
	I0229 18:59:31.749944   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.749954   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:31.749965   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:31.749980   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:31.843949   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:31.843992   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:31.898158   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:31.898186   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:31.949798   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:31.949831   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.965666   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:31.965697   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:32.040368   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:34.541417   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:34.558286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:34.558345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:34.602083   47919 cri.go:89] found id: ""
	I0229 18:59:34.602113   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.602123   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:34.602130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:34.602200   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:34.647108   47919 cri.go:89] found id: ""
	I0229 18:59:34.647136   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.647146   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:34.647151   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:34.647220   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:34.692920   47919 cri.go:89] found id: ""
	I0229 18:59:34.692942   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.692950   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:34.692956   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:34.693000   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:34.739367   47919 cri.go:89] found id: ""
	I0229 18:59:34.739397   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.739408   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:34.739416   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:34.739478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:34.794083   47919 cri.go:89] found id: ""
	I0229 18:59:34.794106   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.794114   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:34.794120   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:34.794179   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:34.865371   47919 cri.go:89] found id: ""
	I0229 18:59:34.865400   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.865412   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:34.865419   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:34.865476   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:34.906957   47919 cri.go:89] found id: ""
	I0229 18:59:34.906986   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.906994   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:34.906999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:34.907063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.584743   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.085375   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.684397   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:37.183641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:36.015759   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.514451   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.948548   47919 cri.go:89] found id: ""
	I0229 18:59:34.948570   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.948577   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:34.948586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:34.948598   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:35.036558   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:35.036594   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:35.080137   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:35.080169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:35.130408   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:35.130436   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:35.148306   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:35.148332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:35.222648   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:37.723158   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:37.741809   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:37.741885   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:37.787147   47919 cri.go:89] found id: ""
	I0229 18:59:37.787177   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.787184   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:37.787192   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:37.787249   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:37.835589   47919 cri.go:89] found id: ""
	I0229 18:59:37.835613   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.835623   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:37.835630   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:37.835687   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:37.895088   47919 cri.go:89] found id: ""
	I0229 18:59:37.895118   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.895130   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:37.895137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:37.895194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:37.940837   47919 cri.go:89] found id: ""
	I0229 18:59:37.940867   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.940878   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:37.940886   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:37.940946   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:37.989155   47919 cri.go:89] found id: ""
	I0229 18:59:37.989183   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.989194   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:37.989203   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:37.989267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:38.026517   47919 cri.go:89] found id: ""
	I0229 18:59:38.026543   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.026553   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:38.026560   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:38.026623   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:38.063299   47919 cri.go:89] found id: ""
	I0229 18:59:38.063328   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.063340   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:38.063347   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:38.063393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:38.106278   47919 cri.go:89] found id: ""
	I0229 18:59:38.106298   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.106305   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:38.106315   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:38.106330   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:38.182985   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:38.183008   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:38.183038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:38.260280   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:38.260312   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:38.303648   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:38.303678   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:38.352889   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:38.352931   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:36.583258   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.583878   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:39.185221   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:41.684957   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.515303   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.017529   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.870416   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:40.885618   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:40.885692   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:40.924088   47919 cri.go:89] found id: ""
	I0229 18:59:40.924115   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.924126   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:40.924133   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:40.924192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:40.959485   47919 cri.go:89] found id: ""
	I0229 18:59:40.959513   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.959524   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:40.959532   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:40.959593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:41.009453   47919 cri.go:89] found id: ""
	I0229 18:59:41.009478   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.009489   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:41.009496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:41.009552   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:41.052894   47919 cri.go:89] found id: ""
	I0229 18:59:41.052922   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.052933   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:41.052940   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:41.052997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:41.098299   47919 cri.go:89] found id: ""
	I0229 18:59:41.098328   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.098338   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:41.098345   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:41.098460   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:41.138287   47919 cri.go:89] found id: ""
	I0229 18:59:41.138313   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.138324   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:41.138333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:41.138395   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:41.176482   47919 cri.go:89] found id: ""
	I0229 18:59:41.176512   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.176522   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:41.176529   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:41.176598   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:41.215284   47919 cri.go:89] found id: ""
	I0229 18:59:41.215307   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.215317   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:41.215327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:41.215342   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:41.230954   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:41.230982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:41.313672   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:41.313696   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:41.313713   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:41.393574   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:41.393610   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:41.443384   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:41.443422   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:43.994323   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:44.008821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:44.008892   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:44.050088   47919 cri.go:89] found id: ""
	I0229 18:59:44.050116   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.050124   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:44.050130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:44.050207   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:44.089721   47919 cri.go:89] found id: ""
	I0229 18:59:44.089749   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.089756   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:44.089762   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:44.089818   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:44.132366   47919 cri.go:89] found id: ""
	I0229 18:59:44.132398   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.132407   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:44.132412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:44.132468   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:44.173568   47919 cri.go:89] found id: ""
	I0229 18:59:44.173591   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.173598   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:44.173604   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:44.173661   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:44.214660   47919 cri.go:89] found id: ""
	I0229 18:59:44.214683   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.214691   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:44.214696   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:44.214747   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:44.254355   47919 cri.go:89] found id: ""
	I0229 18:59:44.254386   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.254397   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:44.254405   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:44.254464   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:44.293548   47919 cri.go:89] found id: ""
	I0229 18:59:44.293573   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.293584   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:44.293591   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:44.293652   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:44.333335   47919 cri.go:89] found id: ""
	I0229 18:59:44.333361   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.333372   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:44.333383   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:44.333398   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:44.348941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:44.348973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:44.419949   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:44.419968   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:44.419982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:44.503445   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:44.503479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:44.558694   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:44.558728   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:40.584127   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.084271   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.685573   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:46.184467   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:45.513896   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.514467   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.129362   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:47.145410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:47.145483   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:47.194037   47919 cri.go:89] found id: ""
	I0229 18:59:47.194073   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.194092   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:47.194100   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:47.194160   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:47.232500   47919 cri.go:89] found id: ""
	I0229 18:59:47.232528   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.232559   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:47.232568   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:47.232634   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:47.271452   47919 cri.go:89] found id: ""
	I0229 18:59:47.271485   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.271494   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:47.271501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:47.271561   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:47.313482   47919 cri.go:89] found id: ""
	I0229 18:59:47.313509   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.313520   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:47.313527   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:47.313590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:47.354958   47919 cri.go:89] found id: ""
	I0229 18:59:47.354988   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.354996   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:47.355001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:47.355092   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:47.393312   47919 cri.go:89] found id: ""
	I0229 18:59:47.393338   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.393349   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:47.393356   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:47.393415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:47.431370   47919 cri.go:89] found id: ""
	I0229 18:59:47.431396   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.431406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:47.431413   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:47.431471   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:47.471659   47919 cri.go:89] found id: ""
	I0229 18:59:47.471683   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.471692   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:47.471702   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:47.471715   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.530365   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:47.530405   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:47.558874   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:47.558903   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:47.644009   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:47.644033   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:47.644047   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:47.730063   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:47.730095   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:45.583524   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.585620   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.083189   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:48.684211   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.686885   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:49.514667   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:52.014092   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.272945   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:50.288718   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:50.288796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:50.331460   47919 cri.go:89] found id: ""
	I0229 18:59:50.331482   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.331489   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:50.331495   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:50.331543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:50.374960   47919 cri.go:89] found id: ""
	I0229 18:59:50.374989   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.375000   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:50.375006   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:50.375076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:50.415073   47919 cri.go:89] found id: ""
	I0229 18:59:50.415095   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.415102   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:50.415107   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:50.415157   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:50.452511   47919 cri.go:89] found id: ""
	I0229 18:59:50.452554   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.452563   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:50.452568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:50.452612   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:50.498103   47919 cri.go:89] found id: ""
	I0229 18:59:50.498125   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.498132   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:50.498137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:50.498193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:50.545366   47919 cri.go:89] found id: ""
	I0229 18:59:50.545397   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.545409   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:50.545417   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:50.545487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:50.608215   47919 cri.go:89] found id: ""
	I0229 18:59:50.608239   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.608250   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:50.608257   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:50.608314   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:50.660835   47919 cri.go:89] found id: ""
	I0229 18:59:50.660861   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.660881   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:50.660892   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:50.660907   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:50.749671   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:50.749712   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.797567   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:50.797595   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:50.848022   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:50.848059   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:50.862797   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:50.862820   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:50.934682   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
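	(Editor's note: the cycle above repeats minikube's per-component container check, "sudo crictl ps -a --quiet --name=<component>", which keeps returning no IDs. Below is a minimal standalone sketch of that same check, run locally rather than over SSH; it is not minikube's cri.go code, and it assumes crictl is installed and sudo works non-interactively.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers runs the same command the log shows and returns the
	// container IDs that crictl prints one per line with --quiet.
	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainers(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the "No container was found matching ..." warnings above.
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}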
	I0229 18:59:53.435804   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:53.451364   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:53.451440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:53.500680   47919 cri.go:89] found id: ""
	I0229 18:59:53.500706   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.500717   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:53.500744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:53.500797   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:53.565306   47919 cri.go:89] found id: ""
	I0229 18:59:53.565334   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.565344   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:53.565351   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:53.565410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:53.631438   47919 cri.go:89] found id: ""
	I0229 18:59:53.631461   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.631479   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:53.631486   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:53.631554   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:53.679482   47919 cri.go:89] found id: ""
	I0229 18:59:53.679506   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.679516   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:53.679524   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:53.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:53.722098   47919 cri.go:89] found id: ""
	I0229 18:59:53.722125   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.722135   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:53.722142   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:53.722211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:53.761804   47919 cri.go:89] found id: ""
	I0229 18:59:53.761838   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.761849   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:53.761858   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:53.761942   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:53.806109   47919 cri.go:89] found id: ""
	I0229 18:59:53.806137   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.806149   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:53.806157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:53.806219   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:53.856794   47919 cri.go:89] found id: ""
	I0229 18:59:53.856823   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.856831   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:53.856839   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:53.856849   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:53.908216   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:53.908252   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:53.923999   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:53.924038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:54.000750   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:54.000772   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:54.000783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:54.086840   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:54.086870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:52.083751   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.586556   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:53.184426   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:55.683893   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:57.685129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.513193   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.515925   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.013745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
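	(Editor's note: the pod_ready.go lines above poll the pod's Ready condition until it reports "True". The sketch below approximates that polling loop by shelling out to kubectl; the context name "my-cluster" is a placeholder, while the namespace and pod name are taken from the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is "True", the same
	// condition the pod_ready entries in the log are waiting on.
	func podReady(kubeContext, namespace, pod string) (bool, error) {
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pod", pod, "-o", "jsonpath="+jsonpath).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := podReady("my-cluster", "kube-system", "metrics-server-57f55c9bc5-5w6c9")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log shows a re-check every couple of seconds
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}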
	I0229 18:59:56.630728   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:56.647368   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:56.647440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:56.693706   47919 cri.go:89] found id: ""
	I0229 18:59:56.693726   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.693733   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:56.693738   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:56.693780   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:56.733377   47919 cri.go:89] found id: ""
	I0229 18:59:56.733404   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.733415   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:56.733423   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:56.733491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:56.772186   47919 cri.go:89] found id: ""
	I0229 18:59:56.772209   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.772216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:56.772222   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:56.772267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:56.811919   47919 cri.go:89] found id: ""
	I0229 18:59:56.811964   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.811977   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:56.811984   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:56.812035   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.849345   47919 cri.go:89] found id: ""
	I0229 18:59:56.849372   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.849383   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:56.849390   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:56.849447   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:56.900091   47919 cri.go:89] found id: ""
	I0229 18:59:56.900119   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.900129   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:56.900136   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:56.900193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:56.937662   47919 cri.go:89] found id: ""
	I0229 18:59:56.937692   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.937703   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:56.937710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:56.937772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:56.978195   47919 cri.go:89] found id: ""
	I0229 18:59:56.978224   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.978234   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:56.978244   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:56.978259   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:57.059190   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:57.059223   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:57.101416   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:57.101442   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:57.156102   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:57.156140   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:57.171401   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:57.171435   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:57.243717   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
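	(Editor's note: every "describe nodes" attempt above fails with "connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port. A quick, kubectl-independent way to confirm that from the guest is a plain TCP dial; this is just a diagnostic sketch, not part of minikube.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the refused connections reported by kubectl in the log.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}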
	I0229 18:59:59.744588   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:59.760099   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:59.760175   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:59.798722   47919 cri.go:89] found id: ""
	I0229 18:59:59.798751   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.798762   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:59.798770   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:59.798830   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:59.842423   47919 cri.go:89] found id: ""
	I0229 18:59:59.842452   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.842463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:59.842470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:59.842532   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:59.883742   47919 cri.go:89] found id: ""
	I0229 18:59:59.883768   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.883775   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:59.883781   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:59.883826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:59.924062   47919 cri.go:89] found id: ""
	I0229 18:59:59.924091   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.924102   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:59.924109   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:59.924166   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.587621   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.087882   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.685911   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:02.185406   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:01.014202   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.014972   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.962465   47919 cri.go:89] found id: ""
	I0229 18:59:59.962497   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.962508   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:59.962515   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:59.962576   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:00.006069   47919 cri.go:89] found id: ""
	I0229 19:00:00.006103   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.006114   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:00.006123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:00.006185   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:00.047671   47919 cri.go:89] found id: ""
	I0229 19:00:00.047697   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.047709   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:00.047715   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:00.047773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:00.091452   47919 cri.go:89] found id: ""
	I0229 19:00:00.091475   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.091486   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:00.091497   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:00.091511   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:00.143282   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:00.143313   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:00.158342   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:00.158366   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:00.239745   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:00.239774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:00.239792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:00.339048   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:00.339083   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:02.898414   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:02.914154   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:02.914221   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:02.956122   47919 cri.go:89] found id: ""
	I0229 19:00:02.956151   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.956211   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:02.956225   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:02.956272   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:02.993609   47919 cri.go:89] found id: ""
	I0229 19:00:02.993636   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.993646   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:02.993659   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:02.993720   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:03.038131   47919 cri.go:89] found id: ""
	I0229 19:00:03.038152   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.038160   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:03.038165   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:03.038217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:03.090845   47919 cri.go:89] found id: ""
	I0229 19:00:03.090866   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.090873   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:03.090878   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:03.090935   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:03.129520   47919 cri.go:89] found id: ""
	I0229 19:00:03.129549   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.129561   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:03.129568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:03.129620   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:03.178528   47919 cri.go:89] found id: ""
	I0229 19:00:03.178557   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.178567   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:03.178575   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:03.178631   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:03.218337   47919 cri.go:89] found id: ""
	I0229 19:00:03.218357   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.218364   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:03.218369   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:03.218417   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:03.267682   47919 cri.go:89] found id: ""
	I0229 19:00:03.267713   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.267726   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:03.267735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:03.267753   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:03.286961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:03.286987   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:03.376514   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:03.376535   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:03.376546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:03.459824   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:03.459872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:03.505821   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:03.505848   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:01.582954   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.583198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:04.684892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.685508   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:05.015836   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:07.514376   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.062525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:06.077637   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:06.077708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:06.119344   47919 cri.go:89] found id: ""
	I0229 19:00:06.119368   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.119376   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:06.119381   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:06.119430   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:06.158209   47919 cri.go:89] found id: ""
	I0229 19:00:06.158232   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.158239   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:06.158245   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:06.158291   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:06.198521   47919 cri.go:89] found id: ""
	I0229 19:00:06.198545   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.198553   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:06.198559   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:06.198609   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:06.235872   47919 cri.go:89] found id: ""
	I0229 19:00:06.235919   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.235930   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:06.235937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:06.235998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:06.282814   47919 cri.go:89] found id: ""
	I0229 19:00:06.282841   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.282853   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:06.282860   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:06.282928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:06.330549   47919 cri.go:89] found id: ""
	I0229 19:00:06.330572   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.330580   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:06.330585   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:06.330632   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:06.399968   47919 cri.go:89] found id: ""
	I0229 19:00:06.399996   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.400006   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:06.400012   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:06.400062   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:06.444899   47919 cri.go:89] found id: ""
	I0229 19:00:06.444921   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.444929   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:06.444937   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:06.444950   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:06.460552   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:06.460580   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:06.532932   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:06.532956   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:06.532969   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:06.615130   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:06.615170   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:06.664499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:06.664532   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
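	(Editor's note: each iteration above gathers the same four diagnostics: kubelet journal, dmesg, CRI-O journal, and container status. The sketch below runs those commands, copied verbatim from the "Run:" lines, as a standalone helper; it assumes bash and non-interactive sudo on the guest and is not minikube's logs.go.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, c := range cmds {
			// CombinedOutput keeps stderr, which is where journalctl/dmesg report problems.
			out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
			fmt.Printf("==> %s (err: %v)\n%s\n", c.name, err, out)
		}
	}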
	I0229 19:00:09.219226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:09.236769   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:09.236829   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:09.292309   47919 cri.go:89] found id: ""
	I0229 19:00:09.292331   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.292339   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:09.292345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:09.292392   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:09.355237   47919 cri.go:89] found id: ""
	I0229 19:00:09.355259   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.355267   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:09.355272   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:09.355319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:09.397950   47919 cri.go:89] found id: ""
	I0229 19:00:09.397977   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.397987   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:09.397995   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:09.398057   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:09.436751   47919 cri.go:89] found id: ""
	I0229 19:00:09.436779   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.436789   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:09.436797   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:09.436862   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:09.480288   47919 cri.go:89] found id: ""
	I0229 19:00:09.480311   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.480318   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:09.480324   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:09.480375   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:09.523576   47919 cri.go:89] found id: ""
	I0229 19:00:09.523599   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.523606   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:09.523611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:09.523658   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:09.562818   47919 cri.go:89] found id: ""
	I0229 19:00:09.562848   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.562859   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:09.562872   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:09.562919   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:09.603331   47919 cri.go:89] found id: ""
	I0229 19:00:09.603357   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.603369   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:09.603379   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:09.603393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.652060   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:09.652089   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:09.668372   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:09.668394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:09.745897   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:09.745923   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:09.745937   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:09.826981   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:09.827014   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:05.590288   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:08.083411   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.084324   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:09.184577   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:11.185922   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.015288   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.513820   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.371447   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:12.385523   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:12.385613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:12.422038   47919 cri.go:89] found id: ""
	I0229 19:00:12.422067   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.422077   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:12.422084   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:12.422155   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:12.460443   47919 cri.go:89] found id: ""
	I0229 19:00:12.460470   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.460487   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:12.460495   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:12.460551   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:12.502791   47919 cri.go:89] found id: ""
	I0229 19:00:12.502820   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.502830   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:12.502838   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:12.502897   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:12.540738   47919 cri.go:89] found id: ""
	I0229 19:00:12.540769   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.540780   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:12.540786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:12.540845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:12.580041   47919 cri.go:89] found id: ""
	I0229 19:00:12.580072   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.580084   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:12.580091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:12.580151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:12.620721   47919 cri.go:89] found id: ""
	I0229 19:00:12.620750   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.620758   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:12.620763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:12.620820   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:12.659877   47919 cri.go:89] found id: ""
	I0229 19:00:12.659906   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.659917   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:12.659925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:12.659975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:12.699133   47919 cri.go:89] found id: ""
	I0229 19:00:12.699160   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.699170   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:12.699177   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:12.699188   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.742164   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:12.742189   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:12.792215   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:12.792248   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:12.808322   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:12.808344   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:12.879089   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:12.879114   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:12.879129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:12.586572   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.083323   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:13.687899   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:16.184671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:14.521430   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:17.013799   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:19.014661   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.466778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:15.480875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:15.480945   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:15.525331   47919 cri.go:89] found id: ""
	I0229 19:00:15.525353   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.525360   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:15.525366   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:15.525422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:15.567787   47919 cri.go:89] found id: ""
	I0229 19:00:15.567819   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.567831   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:15.567838   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:15.567923   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:15.609440   47919 cri.go:89] found id: ""
	I0229 19:00:15.609467   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.609477   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:15.609484   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:15.609559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:15.650113   47919 cri.go:89] found id: ""
	I0229 19:00:15.650142   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.650153   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:15.650161   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:15.650223   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:15.691499   47919 cri.go:89] found id: ""
	I0229 19:00:15.691527   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.691537   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:15.691544   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:15.691603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:15.731199   47919 cri.go:89] found id: ""
	I0229 19:00:15.731227   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.731239   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:15.731246   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:15.731324   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:15.772997   47919 cri.go:89] found id: ""
	I0229 19:00:15.773019   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.773027   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:15.773032   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:15.773091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:15.811223   47919 cri.go:89] found id: ""
	I0229 19:00:15.811244   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.811252   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:15.811271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:15.811283   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:15.862159   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:15.862196   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:15.877436   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:15.877460   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:15.948486   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:15.948513   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:15.948525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:16.030585   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:16.030617   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:18.592020   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:18.607286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:18.607368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:18.647886   47919 cri.go:89] found id: ""
	I0229 19:00:18.647913   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.647924   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:18.647951   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:18.648007   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:18.687394   47919 cri.go:89] found id: ""
	I0229 19:00:18.687420   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.687430   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:18.687436   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:18.687491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:18.734159   47919 cri.go:89] found id: ""
	I0229 19:00:18.734187   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.734198   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:18.734205   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:18.734262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:18.782950   47919 cri.go:89] found id: ""
	I0229 19:00:18.782989   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.783000   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:18.783008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:18.783089   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:18.818695   47919 cri.go:89] found id: ""
	I0229 19:00:18.818723   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.818734   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:18.818742   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:18.818805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:18.859479   47919 cri.go:89] found id: ""
	I0229 19:00:18.859504   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.859515   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:18.859522   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:18.859580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:18.902897   47919 cri.go:89] found id: ""
	I0229 19:00:18.902923   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.902934   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:18.902942   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:18.903002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:18.947708   47919 cri.go:89] found id: ""
	I0229 19:00:18.947731   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.947742   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:18.947752   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:18.947772   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:19.025069   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:19.025092   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:19.025107   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:19.115589   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:19.115626   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:19.164930   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:19.164960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:19.217497   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:19.217531   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:17.584961   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:20.081558   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:18.685924   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.184830   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.015314   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.513573   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.733516   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:21.748586   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:21.748648   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:21.788383   47919 cri.go:89] found id: ""
	I0229 19:00:21.788409   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.788420   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:21.788429   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:21.788487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:21.827147   47919 cri.go:89] found id: ""
	I0229 19:00:21.827176   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.827187   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:21.827194   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:21.827255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:21.867525   47919 cri.go:89] found id: ""
	I0229 19:00:21.867552   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.867561   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:21.867570   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:21.867618   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:21.911542   47919 cri.go:89] found id: ""
	I0229 19:00:21.911564   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.911573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:21.911578   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:21.911629   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:21.949779   47919 cri.go:89] found id: ""
	I0229 19:00:21.949803   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.949815   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:21.949821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:21.949877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:21.989663   47919 cri.go:89] found id: ""
	I0229 19:00:21.989692   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.989701   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:21.989706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:21.989750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:22.040777   47919 cri.go:89] found id: ""
	I0229 19:00:22.040803   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.040813   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:22.040820   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:22.040876   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:22.100661   47919 cri.go:89] found id: ""
	I0229 19:00:22.100682   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.100689   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:22.100697   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:22.100707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:22.165652   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:22.165682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:22.180278   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:22.180301   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:22.250220   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:22.250242   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:22.250254   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:22.339122   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:22.339160   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:24.894485   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:24.910480   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:24.910555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:22.086489   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.582331   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.685199   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:26.185268   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:25.514168   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.014178   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.949857   47919 cri.go:89] found id: ""
	I0229 19:00:24.949880   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.949891   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:24.949898   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:24.949968   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:24.993325   47919 cri.go:89] found id: ""
	I0229 19:00:24.993355   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.993366   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:24.993374   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:24.993431   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:25.053180   47919 cri.go:89] found id: ""
	I0229 19:00:25.053201   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.053208   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:25.053214   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:25.053269   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:25.105886   47919 cri.go:89] found id: ""
	I0229 19:00:25.105912   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.105919   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:25.105924   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:25.105969   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:25.161860   47919 cri.go:89] found id: ""
	I0229 19:00:25.161889   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.161907   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:25.161918   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:25.161982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:25.208566   47919 cri.go:89] found id: ""
	I0229 19:00:25.208591   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.208601   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:25.208625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:25.208690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:25.252151   47919 cri.go:89] found id: ""
	I0229 19:00:25.252173   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.252183   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:25.252190   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:25.252255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:25.293860   47919 cri.go:89] found id: ""
	I0229 19:00:25.293892   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.293903   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:25.293913   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:25.293926   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:25.343332   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:25.343367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:25.357855   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:25.357883   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:25.438031   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:25.438052   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:25.438064   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:25.523752   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:25.523789   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.078701   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:28.103422   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:28.103514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:28.149369   47919 cri.go:89] found id: ""
	I0229 19:00:28.149396   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.149407   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:28.149414   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:28.149481   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:28.191312   47919 cri.go:89] found id: ""
	I0229 19:00:28.191340   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.191350   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:28.191357   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:28.191422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:28.232257   47919 cri.go:89] found id: ""
	I0229 19:00:28.232283   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.232293   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:28.232301   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:28.232370   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:28.278477   47919 cri.go:89] found id: ""
	I0229 19:00:28.278502   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.278512   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:28.278520   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:28.278580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:28.319368   47919 cri.go:89] found id: ""
	I0229 19:00:28.319393   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.319401   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:28.319406   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:28.319451   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:28.363604   47919 cri.go:89] found id: ""
	I0229 19:00:28.363628   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.363636   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:28.363642   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:28.363688   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:28.403101   47919 cri.go:89] found id: ""
	I0229 19:00:28.403126   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.403137   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:28.403144   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:28.403203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:28.443915   47919 cri.go:89] found id: ""
	I0229 19:00:28.443939   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.443949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:28.443961   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:28.443974   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:28.459084   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:28.459112   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:28.531798   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:28.531827   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:28.531843   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:28.618141   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:28.618182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.664993   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:28.665024   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:26.582801   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.584979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.684541   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.184185   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:30.014681   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:32.513959   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.218793   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:31.234816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:31.234890   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:31.273656   47919 cri.go:89] found id: ""
	I0229 19:00:31.273684   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.273692   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:31.273698   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:31.273744   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:31.316292   47919 cri.go:89] found id: ""
	I0229 19:00:31.316314   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.316322   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:31.316330   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:31.316391   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:31.356701   47919 cri.go:89] found id: ""
	I0229 19:00:31.356730   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.356742   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:31.356760   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:31.356813   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:31.395796   47919 cri.go:89] found id: ""
	I0229 19:00:31.395822   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.395830   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:31.395835   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:31.395884   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:31.436461   47919 cri.go:89] found id: ""
	I0229 19:00:31.436483   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.436491   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:31.436496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:31.436543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:31.482802   47919 cri.go:89] found id: ""
	I0229 19:00:31.482830   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.482840   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:31.482848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:31.482895   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:31.525897   47919 cri.go:89] found id: ""
	I0229 19:00:31.525930   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.525939   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:31.525949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:31.526009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:31.566323   47919 cri.go:89] found id: ""
	I0229 19:00:31.566350   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.566362   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:31.566372   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:31.566388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.618633   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:31.618674   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:31.634144   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:31.634166   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:31.712112   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:31.712136   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:31.712150   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:31.795159   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:31.795190   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:34.365419   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:34.380447   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:34.380521   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:34.422256   47919 cri.go:89] found id: ""
	I0229 19:00:34.422284   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.422295   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:34.422302   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:34.422359   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:34.466548   47919 cri.go:89] found id: ""
	I0229 19:00:34.466578   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.466588   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:34.466596   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:34.466654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:34.508359   47919 cri.go:89] found id: ""
	I0229 19:00:34.508395   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.508407   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:34.508414   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:34.508482   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:34.551284   47919 cri.go:89] found id: ""
	I0229 19:00:34.551308   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.551319   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:34.551325   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:34.551371   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:34.593360   47919 cri.go:89] found id: ""
	I0229 19:00:34.593385   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.593395   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:34.593403   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:34.593469   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:34.632097   47919 cri.go:89] found id: ""
	I0229 19:00:34.632117   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.632124   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:34.632135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:34.632180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:34.679495   47919 cri.go:89] found id: ""
	I0229 19:00:34.679521   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.679529   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:34.679534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:34.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:34.723322   47919 cri.go:89] found id: ""
	I0229 19:00:34.723351   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.723361   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:34.723371   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:34.723387   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:34.741497   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:34.741525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:34.833908   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:34.833932   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:34.833944   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:34.927172   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:34.927203   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:31.083690   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.583972   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.186129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:35.685350   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.514619   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:36.514937   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:39.014137   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.980487   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:34.980520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:37.535829   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:37.551274   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:37.551342   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:37.590225   47919 cri.go:89] found id: ""
	I0229 19:00:37.590263   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.590282   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:37.590289   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:37.590347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:37.630546   47919 cri.go:89] found id: ""
	I0229 19:00:37.630574   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.630585   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:37.630592   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:37.630651   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:37.676219   47919 cri.go:89] found id: ""
	I0229 19:00:37.676250   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.676261   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:37.676268   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:37.676329   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:37.713689   47919 cri.go:89] found id: ""
	I0229 19:00:37.713712   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.713721   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:37.713729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:37.713791   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:37.767999   47919 cri.go:89] found id: ""
	I0229 19:00:37.768034   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.768049   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:37.768057   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:37.768114   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:37.816836   47919 cri.go:89] found id: ""
	I0229 19:00:37.816865   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.816876   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:37.816884   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:37.816948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:37.876044   47919 cri.go:89] found id: ""
	I0229 19:00:37.876072   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.876084   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:37.876091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:37.876151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:37.926075   47919 cri.go:89] found id: ""
	I0229 19:00:37.926110   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.926122   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:37.926132   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:37.926147   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:38.004621   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:38.004648   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:38.004663   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:38.091456   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:38.091493   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:38.140118   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:38.140144   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:38.197206   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:38.197243   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:35.587937   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.082516   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.083269   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.184999   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.684029   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:42.684537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:41.016248   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:43.018730   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.713817   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:40.731550   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:40.731613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:40.787760   47919 cri.go:89] found id: ""
	I0229 19:00:40.787788   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.787798   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:40.787806   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:40.787868   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:40.847842   47919 cri.go:89] found id: ""
	I0229 19:00:40.847870   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.847881   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:40.847888   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:40.847956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:40.888452   47919 cri.go:89] found id: ""
	I0229 19:00:40.888481   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.888493   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:40.888501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:40.888562   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:40.927727   47919 cri.go:89] found id: ""
	I0229 19:00:40.927749   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.927757   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:40.927762   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:40.927821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:40.967696   47919 cri.go:89] found id: ""
	I0229 19:00:40.967725   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.967737   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:40.967745   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:40.967804   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:41.008092   47919 cri.go:89] found id: ""
	I0229 19:00:41.008117   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.008127   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:41.008135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:41.008190   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:41.049235   47919 cri.go:89] found id: ""
	I0229 19:00:41.049265   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.049277   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:41.049285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:41.049393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:41.092962   47919 cri.go:89] found id: ""
	I0229 19:00:41.092988   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.092999   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:41.093018   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:41.093033   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:41.146322   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:41.146368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:41.161961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:41.161986   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:41.248674   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:41.248705   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:41.248732   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:41.333647   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:41.333689   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:43.882007   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:43.897786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:43.897860   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:43.943918   47919 cri.go:89] found id: ""
	I0229 19:00:43.943946   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.943955   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:43.943960   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:43.944010   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:43.988622   47919 cri.go:89] found id: ""
	I0229 19:00:43.988643   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.988650   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:43.988655   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:43.988699   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:44.036419   47919 cri.go:89] found id: ""
	I0229 19:00:44.036455   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.036466   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:44.036471   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:44.036530   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:44.078018   47919 cri.go:89] found id: ""
	I0229 19:00:44.078046   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.078056   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:44.078063   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:44.078119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:44.116142   47919 cri.go:89] found id: ""
	I0229 19:00:44.116168   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.116177   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:44.116183   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:44.116243   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:44.158804   47919 cri.go:89] found id: ""
	I0229 19:00:44.158826   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.158833   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:44.158839   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:44.158889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:44.204069   47919 cri.go:89] found id: ""
	I0229 19:00:44.204096   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.204106   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:44.204114   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:44.204173   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:44.247904   47919 cri.go:89] found id: ""
	I0229 19:00:44.247935   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.247949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:44.247959   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:44.247973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:44.338653   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:44.338690   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:44.384041   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:44.384069   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:44.439539   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:44.439575   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:44.455345   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:44.455372   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:44.538204   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:42.083656   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:44.584493   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.184119   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.684925   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.513638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:48.014638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.038895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:47.054457   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:47.054539   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:47.099854   47919 cri.go:89] found id: ""
	I0229 19:00:47.099879   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.099890   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.099899   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:47.099956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:47.141354   47919 cri.go:89] found id: ""
	I0229 19:00:47.141381   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.141391   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.141398   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:47.141454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:47.181906   47919 cri.go:89] found id: ""
	I0229 19:00:47.181932   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.181942   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.181949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:47.182003   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:47.222505   47919 cri.go:89] found id: ""
	I0229 19:00:47.222530   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.222538   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:47.222548   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:47.222603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:47.265567   47919 cri.go:89] found id: ""
	I0229 19:00:47.265604   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.265616   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:47.265625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:47.265690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:47.304698   47919 cri.go:89] found id: ""
	I0229 19:00:47.304723   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.304730   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:47.304736   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:47.304781   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:47.344154   47919 cri.go:89] found id: ""
	I0229 19:00:47.344175   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.344182   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:47.344187   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:47.344230   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:47.383849   47919 cri.go:89] found id: ""
	I0229 19:00:47.383878   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.383889   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:47.383900   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:47.383915   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:47.458895   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:47.458914   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:47.458933   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:47.547776   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:47.547823   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:47.622606   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:47.622639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:47.685327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:47.685356   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:47.084225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:49.584008   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.186274   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.684452   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.014671   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.514321   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.202151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:50.218008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:50.218063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:50.265322   47919 cri.go:89] found id: ""
	I0229 19:00:50.265345   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.265353   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:50.265358   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:50.265424   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:50.305646   47919 cri.go:89] found id: ""
	I0229 19:00:50.305669   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.305677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:50.305682   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:50.305732   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:50.342855   47919 cri.go:89] found id: ""
	I0229 19:00:50.342885   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.342894   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:50.342899   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:50.342948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:50.385365   47919 cri.go:89] found id: ""
	I0229 19:00:50.385396   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.385404   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:50.385410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:50.385456   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:50.425212   47919 cri.go:89] found id: ""
	I0229 19:00:50.425238   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.425256   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:50.425263   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:50.425321   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:50.465325   47919 cri.go:89] found id: ""
	I0229 19:00:50.465355   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.465366   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:50.465382   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:50.465455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:50.516256   47919 cri.go:89] found id: ""
	I0229 19:00:50.516282   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.516291   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:50.516297   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:50.516355   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:50.562233   47919 cri.go:89] found id: ""
	I0229 19:00:50.562262   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.562272   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:50.562280   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:50.562292   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:50.660311   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:50.660346   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:50.702790   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:50.702815   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:50.752085   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:50.752123   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.768346   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:50.768378   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:50.842567   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.343011   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:53.358002   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:53.358072   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:53.398397   47919 cri.go:89] found id: ""
	I0229 19:00:53.398424   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.398433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:53.398440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:53.398501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:53.437020   47919 cri.go:89] found id: ""
	I0229 19:00:53.437048   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.437059   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:53.437067   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:53.437116   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:53.473350   47919 cri.go:89] found id: ""
	I0229 19:00:53.473377   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.473388   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:53.473395   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:53.473454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:53.525678   47919 cri.go:89] found id: ""
	I0229 19:00:53.525701   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.525708   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:53.525716   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:53.525772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:53.595411   47919 cri.go:89] found id: ""
	I0229 19:00:53.595437   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.595448   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:53.595456   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:53.595518   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:53.635890   47919 cri.go:89] found id: ""
	I0229 19:00:53.635916   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.635923   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:53.635929   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:53.635992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:53.674966   47919 cri.go:89] found id: ""
	I0229 19:00:53.674992   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.675000   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:53.675005   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:53.675076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:53.713839   47919 cri.go:89] found id: ""
	I0229 19:00:53.713860   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.713868   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:53.713882   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:53.713896   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:53.765185   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:53.765219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:53.780830   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:53.780855   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:53.858528   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.858552   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:53.858567   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:53.936002   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:53.936034   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:52.085082   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:54.583306   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.184645   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.684780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.015941   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.017683   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:56.481406   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:56.498980   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:56.499059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:56.557482   47919 cri.go:89] found id: ""
	I0229 19:00:56.557509   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.557520   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:56.557528   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:56.557587   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:56.625912   47919 cri.go:89] found id: ""
	I0229 19:00:56.625941   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.625952   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:56.625964   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:56.626023   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:56.663104   47919 cri.go:89] found id: ""
	I0229 19:00:56.663193   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.663210   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:56.663217   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:56.663265   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:56.707473   47919 cri.go:89] found id: ""
	I0229 19:00:56.707494   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.707502   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:56.707507   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:56.707564   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:56.752569   47919 cri.go:89] found id: ""
	I0229 19:00:56.752593   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.752604   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:56.752611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:56.752673   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:56.793618   47919 cri.go:89] found id: ""
	I0229 19:00:56.793660   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.793672   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:56.793680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:56.793741   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:56.833215   47919 cri.go:89] found id: ""
	I0229 19:00:56.833241   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.833252   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:56.833259   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:56.833319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.873162   47919 cri.go:89] found id: ""
	I0229 19:00:56.873187   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.873195   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:56.873203   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:56.873219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:56.887683   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:56.887707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:56.957351   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:56.957369   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:56.957380   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:57.042415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:57.042449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:57.087636   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:57.087660   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:59.637662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:59.652747   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:59.652815   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:59.692780   47919 cri.go:89] found id: ""
	I0229 19:00:59.692801   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.692809   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:59.692814   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:59.692891   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:59.733445   47919 cri.go:89] found id: ""
	I0229 19:00:59.733474   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.733482   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:59.733488   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:59.733535   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:59.769723   47919 cri.go:89] found id: ""
	I0229 19:00:59.769754   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.769764   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:59.769770   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:59.769828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:59.807810   47919 cri.go:89] found id: ""
	I0229 19:00:59.807837   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.807848   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:59.807855   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:59.807916   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:59.849623   47919 cri.go:89] found id: ""
	I0229 19:00:59.849649   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.849659   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:59.849666   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:59.849730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:59.895593   47919 cri.go:89] found id: ""
	I0229 19:00:59.895620   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.895631   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:59.895638   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:59.895698   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:59.935693   47919 cri.go:89] found id: ""
	I0229 19:00:59.935716   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.935724   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:59.935729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:59.935786   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.585093   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.083485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.687672   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:02.184276   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:01.027786   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.514296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.977655   47919 cri.go:89] found id: ""
	I0229 19:00:59.977685   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.977693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:59.977710   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:59.977725   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:59.992518   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:59.992545   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:00.075660   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:00.075679   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:00.075691   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:00.162338   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:00.162384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:00.207000   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:00.207049   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:02.759942   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:02.776225   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:02.776293   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:02.812511   47919 cri.go:89] found id: ""
	I0229 19:01:02.812538   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.812549   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:02.812556   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:02.812614   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:02.851417   47919 cri.go:89] found id: ""
	I0229 19:01:02.851448   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.851467   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:02.851483   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:02.851560   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:02.894440   47919 cri.go:89] found id: ""
	I0229 19:01:02.894465   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.894475   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:02.894487   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:02.894542   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:02.931046   47919 cri.go:89] found id: ""
	I0229 19:01:02.931075   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.931084   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:02.931092   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:02.931150   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:02.971204   47919 cri.go:89] found id: ""
	I0229 19:01:02.971226   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.971233   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:02.971238   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:02.971307   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:03.011695   47919 cri.go:89] found id: ""
	I0229 19:01:03.011723   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.011734   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:03.011741   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:03.011796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:03.054738   47919 cri.go:89] found id: ""
	I0229 19:01:03.054763   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.054775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:03.054782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:03.054857   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:03.099242   47919 cri.go:89] found id: ""
	I0229 19:01:03.099267   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.099278   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:03.099289   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:03.099303   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:03.148748   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:03.148778   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:03.164550   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:03.164578   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:03.241564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:03.241586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:03.241601   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:03.329350   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:03.329384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:01.085890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.582960   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:04.683846   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:06.684979   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.514444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.014275   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.884415   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:05.901979   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:05.902044   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:05.946382   47919 cri.go:89] found id: ""
	I0229 19:01:05.946407   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.946415   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:05.946421   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:05.946488   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:05.991783   47919 cri.go:89] found id: ""
	I0229 19:01:05.991807   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.991816   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:05.991822   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:05.991879   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:06.034390   47919 cri.go:89] found id: ""
	I0229 19:01:06.034417   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.034426   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:06.034431   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:06.034475   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:06.078417   47919 cri.go:89] found id: ""
	I0229 19:01:06.078445   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.078456   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:06.078463   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:06.078527   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:06.119892   47919 cri.go:89] found id: ""
	I0229 19:01:06.119927   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.119938   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:06.119952   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:06.120008   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:06.159308   47919 cri.go:89] found id: ""
	I0229 19:01:06.159332   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.159339   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:06.159346   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:06.159410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:06.208715   47919 cri.go:89] found id: ""
	I0229 19:01:06.208742   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.208751   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:06.208756   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:06.208812   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:06.253831   47919 cri.go:89] found id: ""
	I0229 19:01:06.253858   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.253866   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:06.253881   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:06.253895   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:06.315105   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:06.315141   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:06.349340   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:06.349386   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:06.431456   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:06.431477   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:06.431492   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:06.517754   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:06.517783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.064267   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:09.078751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:09.078822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:09.130371   47919 cri.go:89] found id: ""
	I0229 19:01:09.130396   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.130404   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:09.130410   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:09.130461   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:09.166312   47919 cri.go:89] found id: ""
	I0229 19:01:09.166340   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.166351   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:09.166359   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:09.166415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:09.202957   47919 cri.go:89] found id: ""
	I0229 19:01:09.202978   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.202985   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:09.202991   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:09.203050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:09.242350   47919 cri.go:89] found id: ""
	I0229 19:01:09.242380   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.242391   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:09.242399   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:09.242455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:09.300471   47919 cri.go:89] found id: ""
	I0229 19:01:09.300492   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.300500   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:09.300505   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:09.300568   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:09.356861   47919 cri.go:89] found id: ""
	I0229 19:01:09.356886   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.356893   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:09.356898   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:09.356965   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:09.411042   47919 cri.go:89] found id: ""
	I0229 19:01:09.411067   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.411075   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:09.411080   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:09.411136   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:09.446312   47919 cri.go:89] found id: ""
	I0229 19:01:09.446336   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.446347   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:09.446356   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:09.446367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.492195   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:09.492227   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:09.541943   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:09.541973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:09.557347   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:09.557373   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:09.635319   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:09.635363   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:09.635379   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:05.584255   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.082899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.083808   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:09.189158   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:11.684731   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.513801   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.514492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.224271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:12.243330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:12.243403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:12.285525   47919 cri.go:89] found id: ""
	I0229 19:01:12.285547   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.285556   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:12.285561   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:12.285617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:12.347511   47919 cri.go:89] found id: ""
	I0229 19:01:12.347535   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.347543   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:12.347548   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:12.347593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:12.392145   47919 cri.go:89] found id: ""
	I0229 19:01:12.392207   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.392231   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:12.392248   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:12.392366   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:12.430238   47919 cri.go:89] found id: ""
	I0229 19:01:12.430268   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.430278   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:12.430286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:12.430345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:12.473019   47919 cri.go:89] found id: ""
	I0229 19:01:12.473054   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.473065   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:12.473072   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:12.473131   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:12.510653   47919 cri.go:89] found id: ""
	I0229 19:01:12.510681   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.510692   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:12.510699   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:12.510759   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:12.548137   47919 cri.go:89] found id: ""
	I0229 19:01:12.548163   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.548171   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:12.548176   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:12.548232   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:12.588416   47919 cri.go:89] found id: ""
	I0229 19:01:12.588435   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.588443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:12.588452   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:12.588467   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:12.603651   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:12.603681   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:12.681060   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:12.681081   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:12.681094   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.764839   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:12.764870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:12.807178   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:12.807202   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:12.583319   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.583681   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.184569   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:16.185919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.514955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:17.014358   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.016452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:15.357205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:15.382491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:15.382571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:15.422538   47919 cri.go:89] found id: ""
	I0229 19:01:15.422561   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.422568   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:15.422577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:15.422635   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:15.464564   47919 cri.go:89] found id: ""
	I0229 19:01:15.464593   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.464601   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:15.464607   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:15.464662   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:15.502625   47919 cri.go:89] found id: ""
	I0229 19:01:15.502650   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.502662   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:15.502669   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:15.502724   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:15.543187   47919 cri.go:89] found id: ""
	I0229 19:01:15.543215   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.543229   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:15.543234   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:15.543283   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:15.585273   47919 cri.go:89] found id: ""
	I0229 19:01:15.585296   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.585306   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:15.585314   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:15.585386   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:15.626180   47919 cri.go:89] found id: ""
	I0229 19:01:15.626208   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.626219   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:15.626227   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:15.626288   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:15.670572   47919 cri.go:89] found id: ""
	I0229 19:01:15.670596   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.670604   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:15.670610   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:15.670657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:15.710549   47919 cri.go:89] found id: ""
	I0229 19:01:15.710587   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.710595   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:15.710604   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:15.710618   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.765148   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:15.765180   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:15.780717   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:15.780742   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:15.852811   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:15.852835   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:15.852856   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:15.930728   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:15.930759   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.483798   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:18.497545   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:18.497611   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:18.540226   47919 cri.go:89] found id: ""
	I0229 19:01:18.540256   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.540266   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:18.540274   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:18.540336   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:18.578106   47919 cri.go:89] found id: ""
	I0229 19:01:18.578124   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.578134   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:18.578142   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:18.578192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:18.617138   47919 cri.go:89] found id: ""
	I0229 19:01:18.617167   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.617178   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:18.617185   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:18.617242   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:18.654667   47919 cri.go:89] found id: ""
	I0229 19:01:18.654762   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.654779   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:18.654787   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:18.654845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:18.695837   47919 cri.go:89] found id: ""
	I0229 19:01:18.695859   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.695866   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:18.695875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:18.695929   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:18.738178   47919 cri.go:89] found id: ""
	I0229 19:01:18.738199   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.738206   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:18.738211   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:18.738259   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:18.777018   47919 cri.go:89] found id: ""
	I0229 19:01:18.777044   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.777052   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:18.777058   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:18.777102   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:18.820701   47919 cri.go:89] found id: ""
	I0229 19:01:18.820723   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.820734   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:18.820746   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:18.820762   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:18.907150   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:18.907182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.950363   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:18.950393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:18.999446   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:18.999479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:19.020681   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:19.020714   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:19.139305   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:17.083357   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.087286   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:18.684811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:20.684974   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:22.685289   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.513256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.513492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.640062   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:21.654739   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:21.654799   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:21.701885   47919 cri.go:89] found id: ""
	I0229 19:01:21.701912   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.701921   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:21.701929   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:21.701987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:21.746736   47919 cri.go:89] found id: ""
	I0229 19:01:21.746767   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.746780   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:21.746787   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:21.746847   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:21.784830   47919 cri.go:89] found id: ""
	I0229 19:01:21.784851   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.784859   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:21.784865   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:21.784911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.824122   47919 cri.go:89] found id: ""
	I0229 19:01:21.824151   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.824162   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:21.824171   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:21.824217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:21.869937   47919 cri.go:89] found id: ""
	I0229 19:01:21.869967   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.869979   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:21.869986   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:21.870043   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:21.909902   47919 cri.go:89] found id: ""
	I0229 19:01:21.909928   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.909939   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:21.909946   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:21.910005   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:21.953980   47919 cri.go:89] found id: ""
	I0229 19:01:21.954021   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.954033   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:21.954040   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:21.954108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:21.997483   47919 cri.go:89] found id: ""
	I0229 19:01:21.997510   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.997521   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:21.997531   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:21.997546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:22.108610   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:22.108639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:22.153571   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:22.153596   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:22.204525   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:22.204555   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:22.219217   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:22.219241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:22.294794   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:24.795157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:24.811292   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:24.811363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:24.854354   47919 cri.go:89] found id: ""
	I0229 19:01:24.854387   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.854396   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:24.854402   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:24.854455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:24.890800   47919 cri.go:89] found id: ""
	I0229 19:01:24.890828   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.890838   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:24.890844   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:24.890900   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:24.930961   47919 cri.go:89] found id: ""
	I0229 19:01:24.930983   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.930991   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:24.931001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:24.931073   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.582702   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.584665   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.185732   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:27.683784   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.513886   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.016852   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:24.968719   47919 cri.go:89] found id: ""
	I0229 19:01:24.968740   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.968747   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:24.968752   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:24.968809   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:25.012723   47919 cri.go:89] found id: ""
	I0229 19:01:25.012746   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.012756   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:25.012763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:25.012821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:25.064388   47919 cri.go:89] found id: ""
	I0229 19:01:25.064412   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.064422   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:25.064435   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:25.064496   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:25.122256   47919 cri.go:89] found id: ""
	I0229 19:01:25.122277   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.122286   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:25.122291   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:25.122335   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:25.165487   47919 cri.go:89] found id: ""
	I0229 19:01:25.165515   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.165526   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:25.165536   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:25.165557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:25.249294   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:25.249333   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:25.297013   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:25.297048   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:25.346276   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:25.346309   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:25.362604   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:25.362635   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:25.434586   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:27.935727   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:27.950680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:27.950750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:27.989253   47919 cri.go:89] found id: ""
	I0229 19:01:27.989282   47919 logs.go:276] 0 containers: []
	W0229 19:01:27.989293   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:27.989300   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:27.989357   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:28.039714   47919 cri.go:89] found id: ""
	I0229 19:01:28.039741   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.039750   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:28.039763   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:28.039828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:28.102860   47919 cri.go:89] found id: ""
	I0229 19:01:28.102886   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.102897   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:28.102904   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:28.102971   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:28.160075   47919 cri.go:89] found id: ""
	I0229 19:01:28.160097   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.160104   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:28.160110   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:28.160180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:28.200297   47919 cri.go:89] found id: ""
	I0229 19:01:28.200317   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.200325   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:28.200330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:28.200393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:28.239912   47919 cri.go:89] found id: ""
	I0229 19:01:28.239944   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.239955   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:28.239963   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:28.240018   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:28.278525   47919 cri.go:89] found id: ""
	I0229 19:01:28.278550   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.278558   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:28.278564   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:28.278617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:28.315659   47919 cri.go:89] found id: ""
	I0229 19:01:28.315685   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.315693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:28.315703   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:28.315716   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:28.330102   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:28.330127   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:28.402474   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:28.402497   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:28.402513   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:28.486271   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:28.486308   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:28.531888   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:28.531918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:26.083338   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.083983   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.085481   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:29.684229   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.184054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.513642   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.514405   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:31.082385   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:31.122771   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:31.122844   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:31.165097   47919 cri.go:89] found id: ""
	I0229 19:01:31.165127   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.165138   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:31.165148   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:31.165215   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:31.209449   47919 cri.go:89] found id: ""
	I0229 19:01:31.209482   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.209492   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:31.209498   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:31.209559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:31.249660   47919 cri.go:89] found id: ""
	I0229 19:01:31.249687   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.249698   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:31.249705   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:31.249770   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:31.299268   47919 cri.go:89] found id: ""
	I0229 19:01:31.299292   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.299301   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:31.299308   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:31.299363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:31.339078   47919 cri.go:89] found id: ""
	I0229 19:01:31.339111   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.339123   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:31.339131   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:31.339194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:31.378548   47919 cri.go:89] found id: ""
	I0229 19:01:31.378576   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.378587   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:31.378595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:31.378654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:31.418744   47919 cri.go:89] found id: ""
	I0229 19:01:31.418780   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.418812   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:31.418824   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:31.418889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:31.464078   47919 cri.go:89] found id: ""
	I0229 19:01:31.464103   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.464113   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:31.464124   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:31.464138   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.516406   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:31.516434   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:31.531504   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:31.531527   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:31.607391   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:31.607413   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:31.607426   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:31.691582   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:31.691609   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.233205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:34.250283   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:34.250345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:34.294588   47919 cri.go:89] found id: ""
	I0229 19:01:34.294620   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.294631   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:34.294639   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:34.294712   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:34.337033   47919 cri.go:89] found id: ""
	I0229 19:01:34.337061   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.337071   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:34.337079   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:34.337141   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:34.382800   47919 cri.go:89] found id: ""
	I0229 19:01:34.382831   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.382840   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:34.382845   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:34.382904   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:34.422931   47919 cri.go:89] found id: ""
	I0229 19:01:34.422959   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.422970   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:34.422977   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:34.423059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:34.469724   47919 cri.go:89] found id: ""
	I0229 19:01:34.469755   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.469765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:34.469773   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:34.469824   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:34.513428   47919 cri.go:89] found id: ""
	I0229 19:01:34.513461   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.513472   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:34.513479   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:34.513555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:34.552593   47919 cri.go:89] found id: ""
	I0229 19:01:34.552638   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.552648   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:34.552655   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:34.552717   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:34.596516   47919 cri.go:89] found id: ""
	I0229 19:01:34.596538   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.596546   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:34.596554   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:34.596568   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:34.611782   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:34.611805   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:34.694333   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:34.694352   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:34.694368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:34.781638   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:34.781669   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.832910   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:34.832943   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:32.584363   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.585650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.185025   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:36.683723   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.515185   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.013287   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.014417   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.398458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:37.415617   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:37.415696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:37.455390   47919 cri.go:89] found id: ""
	I0229 19:01:37.455421   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.455433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:37.455440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:37.455501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:37.498869   47919 cri.go:89] found id: ""
	I0229 19:01:37.498890   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.498901   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:37.498909   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:37.498972   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:37.538928   47919 cri.go:89] found id: ""
	I0229 19:01:37.538952   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.538960   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:37.538966   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:37.539012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:37.577278   47919 cri.go:89] found id: ""
	I0229 19:01:37.577299   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.577310   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:37.577317   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:37.577372   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:37.620313   47919 cri.go:89] found id: ""
	I0229 19:01:37.620342   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.620352   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:37.620359   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:37.620420   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:37.657696   47919 cri.go:89] found id: ""
	I0229 19:01:37.657717   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.657726   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:37.657734   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:37.657792   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:37.698814   47919 cri.go:89] found id: ""
	I0229 19:01:37.698833   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.698841   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:37.698848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:37.698902   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:37.736438   47919 cri.go:89] found id: ""
	I0229 19:01:37.736469   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.736480   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:37.736490   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:37.736506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:37.753849   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:37.753871   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:37.854740   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:37.854764   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:37.854783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:37.943837   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:37.943872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:37.988180   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:37.988209   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:37.084353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.582760   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.183743   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.184218   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.014652   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.014745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:40.543133   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:40.558453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:40.558526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:40.599794   47919 cri.go:89] found id: ""
	I0229 19:01:40.599814   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.599821   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:40.599827   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:40.599874   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:40.641738   47919 cri.go:89] found id: ""
	I0229 19:01:40.641762   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.641769   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:40.641775   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:40.641819   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:40.683905   47919 cri.go:89] found id: ""
	I0229 19:01:40.683935   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.683945   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:40.683953   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:40.684006   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:40.727645   47919 cri.go:89] found id: ""
	I0229 19:01:40.727675   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.727685   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:40.727693   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:40.727754   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:40.785142   47919 cri.go:89] found id: ""
	I0229 19:01:40.785172   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.785192   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:40.785199   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:40.785252   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:40.854534   47919 cri.go:89] found id: ""
	I0229 19:01:40.854560   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.854571   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:40.854580   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:40.854639   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:40.900823   47919 cri.go:89] found id: ""
	I0229 19:01:40.900851   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.900862   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:40.900869   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:40.900928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:40.938108   47919 cri.go:89] found id: ""
	I0229 19:01:40.938135   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.938146   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:40.938156   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:40.938171   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:40.987452   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:40.987482   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:41.037388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:41.037417   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:41.051987   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:41.052015   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:41.126077   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:41.126102   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:41.126116   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:43.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:43.730683   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:43.730755   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:43.790637   47919 cri.go:89] found id: ""
	I0229 19:01:43.790665   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.790676   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:43.790682   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:43.790731   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:43.848237   47919 cri.go:89] found id: ""
	I0229 19:01:43.848263   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.848272   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:43.848277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:43.848337   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:43.897892   47919 cri.go:89] found id: ""
	I0229 19:01:43.897920   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.897928   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:43.897934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:43.897989   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:43.936068   47919 cri.go:89] found id: ""
	I0229 19:01:43.936089   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.936097   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:43.936102   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:43.936149   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:43.978636   47919 cri.go:89] found id: ""
	I0229 19:01:43.978670   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.978682   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:43.978689   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:43.978751   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:44.018642   47919 cri.go:89] found id: ""
	I0229 19:01:44.018676   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.018684   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:44.018690   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:44.018737   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:44.056237   47919 cri.go:89] found id: ""
	I0229 19:01:44.056267   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.056278   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:44.056285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:44.056347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:44.095489   47919 cri.go:89] found id: ""
	I0229 19:01:44.095522   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.095532   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:44.095543   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:44.095557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:44.139407   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:44.139433   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:44.189893   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:44.189921   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:44.206426   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:44.206449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:44.285594   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:44.285621   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:44.285638   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:41.584614   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:44.083599   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.185509   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.683851   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.684064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.015082   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.017540   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:46.869271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:46.885267   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:46.885356   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:46.921696   47919 cri.go:89] found id: ""
	I0229 19:01:46.921718   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.921725   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:46.921731   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:46.921789   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:46.960265   47919 cri.go:89] found id: ""
	I0229 19:01:46.960291   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.960302   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:46.960309   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:46.960367   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:46.998035   47919 cri.go:89] found id: ""
	I0229 19:01:46.998062   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.998070   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:46.998075   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:46.998119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:47.041563   47919 cri.go:89] found id: ""
	I0229 19:01:47.041586   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.041595   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:47.041600   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:47.041643   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:47.084146   47919 cri.go:89] found id: ""
	I0229 19:01:47.084167   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.084174   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:47.084179   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:47.084227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:47.126813   47919 cri.go:89] found id: ""
	I0229 19:01:47.126835   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.126845   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:47.126853   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:47.126909   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:47.165379   47919 cri.go:89] found id: ""
	I0229 19:01:47.165399   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.165406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:47.165412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:47.165454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:47.204263   47919 cri.go:89] found id: ""
	I0229 19:01:47.204306   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.204316   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:47.204328   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:47.204345   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:47.248848   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:47.248876   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:47.299388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:47.299416   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:47.314484   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:47.314507   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:47.386231   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:47.386256   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:47.386272   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:46.084527   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:48.085557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:50.189188   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:52.684126   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.513497   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:51.514191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.515909   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.965988   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:49.980621   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:49.980700   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:50.025010   47919 cri.go:89] found id: ""
	I0229 19:01:50.025030   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.025037   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:50.025042   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:50.025090   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:50.066947   47919 cri.go:89] found id: ""
	I0229 19:01:50.066976   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.066984   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:50.066990   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:50.067061   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:50.108892   47919 cri.go:89] found id: ""
	I0229 19:01:50.108913   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.108931   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:50.108937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:50.108997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:50.149601   47919 cri.go:89] found id: ""
	I0229 19:01:50.149626   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.149636   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:50.149643   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:50.149704   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:50.191881   47919 cri.go:89] found id: ""
	I0229 19:01:50.191908   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.191918   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:50.191925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:50.191987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:50.233782   47919 cri.go:89] found id: ""
	I0229 19:01:50.233803   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.233811   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:50.233816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:50.233870   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:50.274913   47919 cri.go:89] found id: ""
	I0229 19:01:50.274941   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.274950   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:50.274955   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:50.275050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:50.321924   47919 cri.go:89] found id: ""
	I0229 19:01:50.321945   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.321953   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:50.321967   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:50.321978   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:50.367357   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:50.367388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:50.417229   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:50.417260   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:50.432031   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:50.432056   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:50.504920   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.504942   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:50.504960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.110884   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:53.126947   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:53.127004   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:53.166940   47919 cri.go:89] found id: ""
	I0229 19:01:53.166965   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.166975   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:53.166982   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:53.167054   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:53.205917   47919 cri.go:89] found id: ""
	I0229 19:01:53.205960   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.205968   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:53.205974   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:53.206030   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:53.245547   47919 cri.go:89] found id: ""
	I0229 19:01:53.245577   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.245587   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:53.245595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:53.245654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:53.287513   47919 cri.go:89] found id: ""
	I0229 19:01:53.287540   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.287550   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:53.287557   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:53.287617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:53.329269   47919 cri.go:89] found id: ""
	I0229 19:01:53.329299   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.329310   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:53.329318   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:53.329379   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:53.377438   47919 cri.go:89] found id: ""
	I0229 19:01:53.377467   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.377478   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:53.377485   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:53.377549   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:53.418414   47919 cri.go:89] found id: ""
	I0229 19:01:53.418440   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.418448   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:53.418453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:53.418514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:53.458365   47919 cri.go:89] found id: ""
	I0229 19:01:53.458393   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.458402   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:53.458409   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:53.458421   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.540710   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:53.540744   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:53.637271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:53.637302   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:53.687822   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:53.687850   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:53.703482   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:53.703506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:53.779564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.584198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.082170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:55.082683   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:54.685554   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.685951   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.013441   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:58.016917   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.280300   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:56.295210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:56.295295   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:56.336903   47919 cri.go:89] found id: ""
	I0229 19:01:56.336935   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.336945   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:56.336953   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:56.337002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:56.373300   47919 cri.go:89] found id: ""
	I0229 19:01:56.373322   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.373330   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:56.373338   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:56.373390   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:56.411949   47919 cri.go:89] found id: ""
	I0229 19:01:56.411975   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.411984   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:56.411990   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:56.412047   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:56.453302   47919 cri.go:89] found id: ""
	I0229 19:01:56.453329   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.453339   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:56.453344   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:56.453403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:56.490543   47919 cri.go:89] found id: ""
	I0229 19:01:56.490565   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.490576   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:56.490582   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:56.490637   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:56.547078   47919 cri.go:89] found id: ""
	I0229 19:01:56.547101   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.547108   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:56.547113   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:56.547171   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:56.598382   47919 cri.go:89] found id: ""
	I0229 19:01:56.598408   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.598417   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:56.598424   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:56.598478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:56.646090   47919 cri.go:89] found id: ""
	I0229 19:01:56.646117   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.646125   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:56.646134   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:56.646145   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:56.691685   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:56.691711   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:56.742886   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:56.742927   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:56.758326   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:56.758350   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:56.830140   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.830160   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:56.830177   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.414437   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:59.429710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:59.429793   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:59.473993   47919 cri.go:89] found id: ""
	I0229 19:01:59.474018   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.474025   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:59.474031   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:59.474091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:59.529114   47919 cri.go:89] found id: ""
	I0229 19:01:59.529143   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.529157   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:59.529164   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:59.529222   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:59.596624   47919 cri.go:89] found id: ""
	I0229 19:01:59.596654   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.596665   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:59.596672   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:59.596730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:59.641088   47919 cri.go:89] found id: ""
	I0229 19:01:59.641118   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.641130   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:59.641138   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:59.641198   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:59.682294   47919 cri.go:89] found id: ""
	I0229 19:01:59.682318   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.682327   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:59.682333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:59.682406   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:59.722881   47919 cri.go:89] found id: ""
	I0229 19:01:59.722902   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.722910   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:59.722915   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:59.722982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:59.761727   47919 cri.go:89] found id: ""
	I0229 19:01:59.761757   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.761767   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:59.761778   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:59.761839   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:59.805733   47919 cri.go:89] found id: ""
	I0229 19:01:59.805762   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.805772   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:59.805783   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:59.805798   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:59.883702   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:59.883721   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:59.883733   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:57.083166   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.085841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.183892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:01.184393   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:00.513790   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.013807   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.960649   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:59.960682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:00.012085   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:00.012121   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:00.065794   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:00.065834   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.583319   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:02.603123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:02.603178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:02.654992   47919 cri.go:89] found id: ""
	I0229 19:02:02.655017   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.655046   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:02.655053   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:02.655103   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:02.697067   47919 cri.go:89] found id: ""
	I0229 19:02:02.697098   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.697109   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:02.697116   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:02.697178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:02.734804   47919 cri.go:89] found id: ""
	I0229 19:02:02.734828   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.734835   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:02.734841   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:02.734893   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:02.778292   47919 cri.go:89] found id: ""
	I0229 19:02:02.778313   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.778321   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:02.778328   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:02.778382   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:02.819431   47919 cri.go:89] found id: ""
	I0229 19:02:02.819458   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.819470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:02.819478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:02.819537   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:02.862409   47919 cri.go:89] found id: ""
	I0229 19:02:02.862432   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.862439   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:02.862445   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:02.862487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:02.902486   47919 cri.go:89] found id: ""
	I0229 19:02:02.902513   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.902521   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:02.902526   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:02.902571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:02.939408   47919 cri.go:89] found id: ""
	I0229 19:02:02.939436   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.939443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:02.939451   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:02.939462   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.954539   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:02.954564   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:03.032534   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:03.032556   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:03.032574   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:03.116064   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:03.116096   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:03.167242   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:03.167265   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:01.582557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.583876   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:04.576948   47608 pod_ready.go:81] duration metric: took 4m0.00105469s waiting for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:04.576996   47608 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:04.577015   47608 pod_ready.go:38] duration metric: took 4m12.91384632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:04.577039   47608 kubeadm.go:640] restartCluster took 4m30.900514081s
	W0229 19:02:04.577101   47608 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:04.577137   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:03.684074   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.686050   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.686409   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.014368   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.518556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.718312   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:05.732879   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:05.733012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:05.774525   47919 cri.go:89] found id: ""
	I0229 19:02:05.774557   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.774569   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:05.774577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:05.774640   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:05.817870   47919 cri.go:89] found id: ""
	I0229 19:02:05.817900   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.817912   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:05.817919   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:05.817998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:05.859533   47919 cri.go:89] found id: ""
	I0229 19:02:05.859565   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.859579   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:05.859587   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:05.859646   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:05.904971   47919 cri.go:89] found id: ""
	I0229 19:02:05.905003   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.905014   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:05.905021   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:05.905086   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:05.950431   47919 cri.go:89] found id: ""
	I0229 19:02:05.950459   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.950470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:05.950478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:05.950546   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:05.999464   47919 cri.go:89] found id: ""
	I0229 19:02:05.999489   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.999500   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:05.999508   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:05.999588   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:06.045086   47919 cri.go:89] found id: ""
	I0229 19:02:06.045117   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.045133   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:06.045140   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:06.045203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:06.091542   47919 cri.go:89] found id: ""
	I0229 19:02:06.091571   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.091583   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:06.091592   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:06.091607   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:06.156524   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:06.156558   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:06.174941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:06.174965   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:06.260443   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:06.260467   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:06.260483   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:06.377415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:06.377457   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:08.931407   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:08.946035   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:08.946108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:08.989299   47919 cri.go:89] found id: ""
	I0229 19:02:08.989326   47919 logs.go:276] 0 containers: []
	W0229 19:02:08.989338   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:08.989345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:08.989405   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:09.033634   47919 cri.go:89] found id: ""
	I0229 19:02:09.033664   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.033677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:09.033684   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:09.033745   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:09.084381   47919 cri.go:89] found id: ""
	I0229 19:02:09.084406   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.084435   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:09.084442   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:09.084507   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:09.132526   47919 cri.go:89] found id: ""
	I0229 19:02:09.132555   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.132573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:09.132581   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:09.132644   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:09.182655   47919 cri.go:89] found id: ""
	I0229 19:02:09.182684   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.182694   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:09.182701   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:09.182764   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:09.223164   47919 cri.go:89] found id: ""
	I0229 19:02:09.223191   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.223202   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:09.223210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:09.223267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:09.271882   47919 cri.go:89] found id: ""
	I0229 19:02:09.271908   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.271926   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:09.271934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:09.271992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:09.331796   47919 cri.go:89] found id: ""
	I0229 19:02:09.331826   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.331837   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:09.331847   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:09.331860   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:09.398969   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:09.399009   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:09.418992   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:09.419040   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:09.503358   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:09.503381   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:09.503394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:09.612549   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:09.612586   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:10.184741   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.685204   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:10.024230   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.513343   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.162138   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:12.175827   47919 kubeadm.go:640] restartCluster took 4m14.562960798s
	W0229 19:02:12.175902   47919 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 19:02:12.175940   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:12.639231   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:12.658353   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:12.671552   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:12.684278   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:12.684323   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:02:12.903644   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:15.184189   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.184275   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:14.517015   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.015195   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.184474   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:21.184737   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.513735   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:22.016650   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:23.185852   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:25.685744   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:24.516493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:26.519091   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:29.013740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:28.184960   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:30.685098   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:31.013808   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:33.514912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.055439   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.47828283s)
	I0229 19:02:37.055501   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:37.077296   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:37.089984   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:37.100332   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:37.100379   47608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:02:37.156153   47608 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:02:37.156243   47608 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:02:37.317040   47608 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:02:37.317142   47608 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:02:37.317220   47608 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:02:37.551800   47608 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:02:33.184422   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:35.686104   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.553918   47608 out.go:204]   - Generating certificates and keys ...
	I0229 19:02:37.554019   47608 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:02:37.554099   47608 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:02:37.554197   47608 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:02:37.554271   47608 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:02:37.554545   47608 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:02:37.555258   47608 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:02:37.555792   47608 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:02:37.556150   47608 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:02:37.556697   47608 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:02:37.557215   47608 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:02:37.557744   47608 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:02:37.557835   47608 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:02:37.725663   47608 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:02:37.801114   47608 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:02:37.971825   47608 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:02:38.081281   47608 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:02:38.081986   47608 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:02:38.086435   47608 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:02:36.013356   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.014838   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.088264   47608 out.go:204]   - Booting up control plane ...
	I0229 19:02:38.088353   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:02:38.088442   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:02:38.088533   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:02:38.106686   47608 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:02:38.107606   47608 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:02:38.107671   47608 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:02:38.264387   47608 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:02:38.185682   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.684963   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.014933   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:42.016282   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.768315   47608 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503831 seconds
	I0229 19:02:44.768482   47608 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:02:44.786115   47608 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:02:45.321509   47608 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:02:45.321785   47608 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-991128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:02:45.834905   47608 kubeadm.go:322] [bootstrap-token] Using token: 53x4pg.x71etkalcz6sdqmq
	I0229 19:02:45.836192   47608 out.go:204]   - Configuring RBAC rules ...
	I0229 19:02:45.836319   47608 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:02:45.843486   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:02:45.854690   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:02:45.866571   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:02:45.870812   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:02:45.874413   47608 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:02:45.891377   47608 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:02:46.190541   47608 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:02:46.251452   47608 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:02:46.254418   47608 kubeadm.go:322] 
	I0229 19:02:46.254529   47608 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:02:46.254552   47608 kubeadm.go:322] 
	I0229 19:02:46.254653   47608 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:02:46.254663   47608 kubeadm.go:322] 
	I0229 19:02:46.254693   47608 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:02:46.254777   47608 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:02:46.254843   47608 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:02:46.254856   47608 kubeadm.go:322] 
	I0229 19:02:46.254932   47608 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:02:46.254949   47608 kubeadm.go:322] 
	I0229 19:02:46.255010   47608 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:02:46.255035   47608 kubeadm.go:322] 
	I0229 19:02:46.255115   47608 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:02:46.255219   47608 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:02:46.255288   47608 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:02:46.255298   47608 kubeadm.go:322] 
	I0229 19:02:46.255366   47608 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:02:46.255456   47608 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:02:46.255469   47608 kubeadm.go:322] 
	I0229 19:02:46.255574   47608 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.255704   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:02:46.255726   47608 kubeadm.go:322] 	--control-plane 
	I0229 19:02:46.255730   47608 kubeadm.go:322] 
	I0229 19:02:46.255838   47608 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:02:46.255850   47608 kubeadm.go:322] 
	I0229 19:02:46.255951   47608 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.256097   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:02:46.261669   47608 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:46.264240   47608 cni.go:84] Creating CNI manager for ""
	I0229 19:02:46.264255   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:02:46.266874   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:02:43.185008   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:45.685480   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.515334   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:47.014269   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:48.006787   48088 pod_ready.go:81] duration metric: took 4m0.000159724s waiting for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:48.006810   48088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:48.006828   48088 pod_ready.go:38] duration metric: took 4m13.055720974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:48.006852   48088 kubeadm.go:640] restartCluster took 4m30.764284147s
	W0229 19:02:48.006932   48088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:48.006958   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:46.268155   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:02:46.302630   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:02:46.363238   47608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:02:46.363314   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.363332   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=embed-certs-991128 minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.429324   47608 ops.go:34] apiserver oom_adj: -16
	I0229 19:02:46.736245   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.236707   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.736427   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.236379   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.736599   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.236640   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.736492   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:50.237145   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.184252   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.185542   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:52.683769   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.736510   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.236643   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.736840   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.236378   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.736992   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.236672   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.736958   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.236590   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.736323   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.237218   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.184845   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:57.685255   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:55.736774   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.236342   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.736380   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.236930   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.737100   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.237031   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.387963   47608 kubeadm.go:1088] duration metric: took 12.024710189s to wait for elevateKubeSystemPrivileges.
	I0229 19:02:58.388004   47608 kubeadm.go:406] StartCluster complete in 5m24.764885945s
	I0229 19:02:58.388027   47608 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.388120   47608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:02:58.390675   47608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.390953   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:02:58.391045   47608 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:02:58.391123   47608 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-991128"
	I0229 19:02:58.391146   47608 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-991128"
	W0229 19:02:58.391154   47608 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:02:58.391154   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:02:58.391203   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.391204   47608 addons.go:69] Setting default-storageclass=true in profile "embed-certs-991128"
	I0229 19:02:58.391244   47608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-991128"
	I0229 19:02:58.391596   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391624   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391698   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391718   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391204   47608 addons.go:69] Setting metrics-server=true in profile "embed-certs-991128"
	I0229 19:02:58.391948   47608 addons.go:234] Setting addon metrics-server=true in "embed-certs-991128"
	W0229 19:02:58.391957   47608 addons.go:243] addon metrics-server should already be in state true
	I0229 19:02:58.391993   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.392356   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.392387   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.409953   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0229 19:02:58.409972   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0229 19:02:58.410460   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.410478   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411005   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411018   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411018   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411048   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411360   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0229 19:02:58.411529   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411534   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411740   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411752   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.412075   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.412114   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.412144   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.412164   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.412662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.413148   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.413178   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.415173   47608 addons.go:234] Setting addon default-storageclass=true in "embed-certs-991128"
	W0229 19:02:58.415195   47608 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:02:58.415222   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.415608   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.415638   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.429891   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0229 19:02:58.430108   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0229 19:02:58.430343   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.430782   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.431278   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431299   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431355   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431369   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.431720   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.432048   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.432471   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.432497   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.432570   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0229 19:02:58.432926   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.433593   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.433611   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.433700   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.436201   47608 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:02:58.434375   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.437531   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:02:58.437549   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:02:58.437568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.436414   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.440191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.441799   47608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:02:58.440820   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.441382   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.443189   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.443204   47608 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.443216   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:02:58.443228   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.443226   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.443288   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.443399   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.443538   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.446253   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446573   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.446840   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446885   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.447103   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.447250   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.447399   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.449854   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0229 19:02:58.450308   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.450842   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.450862   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.451215   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.452123   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.453574   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.453805   47608 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.453822   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:02:58.453836   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.456718   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457141   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.457198   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.457891   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.458055   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.458208   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.622646   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.666581   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:02:58.680294   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:02:58.680319   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:02:58.701182   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.826426   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:02:58.826454   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:02:58.896074   47608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-991128" context rescaled to 1 replicas
	I0229 19:02:58.896112   47608 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:02:58.897987   47608 out.go:177] * Verifying Kubernetes components...
	I0229 19:02:58.899307   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:58.943695   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:02:58.943719   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:02:59.111473   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:00.514730   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.892048484s)
	I0229 19:03:00.514786   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.514797   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515119   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515140   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.515155   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.515151   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.515163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515407   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515422   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.525724   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.525747   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.526016   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.526034   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.526058   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.549463   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882844212s)
	I0229 19:03:00.549496   47608 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:01.032296   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331073482s)
	I0229 19:03:01.032299   47608 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.132962021s)
	I0229 19:03:01.032378   47608 node_ready.go:35] waiting up to 6m0s for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.032351   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032449   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.032776   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.032863   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.032884   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.032912   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032929   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.033250   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.033294   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.033313   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.054533   47608 node_ready.go:49] node "embed-certs-991128" has status "Ready":"True"
	I0229 19:03:01.054561   47608 node_ready.go:38] duration metric: took 22.162376ms waiting for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.054574   47608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:01.073737   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962221621s)
	I0229 19:03:01.073792   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.073807   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074112   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074134   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074144   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.074152   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074378   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.074414   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074423   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074438   47608 addons.go:470] Verifying addon metrics-server=true in "embed-certs-991128"
	I0229 19:03:01.076668   47608 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0229 19:03:00.186003   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:02.684214   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:01.077896   47608 addons.go:505] enable addons completed in 2.686848059s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0229 19:03:01.090039   47608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101161   47608 pod_ready.go:92] pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.101188   47608 pod_ready.go:81] duration metric: took 11.117889ms waiting for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101200   47608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106035   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.106059   47608 pod_ready.go:81] duration metric: took 4.853039ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106069   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112716   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.112741   47608 pod_ready.go:81] duration metric: took 6.663364ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112753   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117682   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.117712   47608 pod_ready.go:81] duration metric: took 4.950508ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117723   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449759   47608 pod_ready.go:92] pod "kube-proxy-5grst" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.449780   47608 pod_ready.go:81] duration metric: took 332.0508ms waiting for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449789   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837609   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.837633   47608 pod_ready.go:81] duration metric: took 387.837788ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837641   47608 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:03.844755   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.183456   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.184892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.844890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:09.185625   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:11.683928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:10.345767   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:12.346373   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:14.844773   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:13.684321   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.184064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:19.346873   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:18.185564   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.685386   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.199795   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.19281949s)
	I0229 19:03:20.199858   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:20.217490   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:20.230760   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:20.243524   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:20.243561   48088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:20.456117   48088 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:03:21.845081   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.845701   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.184306   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.185094   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.677354   47515 pod_ready.go:81] duration metric: took 4m0.000327645s waiting for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	E0229 19:03:25.677385   47515 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:03:25.677415   47515 pod_ready.go:38] duration metric: took 4m14.05174509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:25.677440   47515 kubeadm.go:640] restartCluster took 4m31.88709285s
	W0229 19:03:25.677495   47515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:03:25.677520   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:03:29.090699   48088 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:03:29.090795   48088 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:29.090912   48088 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:29.091058   48088 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:29.091185   48088 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:29.091273   48088 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:29.092712   48088 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:29.092825   48088 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:29.092914   48088 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:29.093021   48088 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:29.093110   48088 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:29.093199   48088 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:29.093273   48088 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:29.093353   48088 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:29.093430   48088 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:29.093523   48088 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:29.093617   48088 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:29.093668   48088 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:29.093741   48088 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:29.093811   48088 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:29.093880   48088 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:29.093962   48088 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:29.094031   48088 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:29.094133   48088 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:29.094211   48088 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:29.095825   48088 out.go:204]   - Booting up control plane ...
	I0229 19:03:29.095939   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:29.096048   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:29.096154   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:29.096322   48088 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:29.096423   48088 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:29.096489   48088 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:29.096694   48088 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:03:29.096769   48088 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003591 seconds
	I0229 19:03:29.096853   48088 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:03:29.096951   48088 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:03:29.097006   48088 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:03:29.097158   48088 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-153528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:03:29.097202   48088 kubeadm.go:322] [bootstrap-token] Using token: 1l0lv4.q8mu3aeamo8e3253
	I0229 19:03:29.098693   48088 out.go:204]   - Configuring RBAC rules ...
	I0229 19:03:29.098829   48088 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:03:29.098945   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:03:29.099166   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:03:29.099357   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:03:29.099502   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:03:29.099613   48088 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:03:29.099756   48088 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:03:29.099816   48088 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:03:29.099874   48088 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:03:29.099884   48088 kubeadm.go:322] 
	I0229 19:03:29.099961   48088 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:03:29.099970   48088 kubeadm.go:322] 
	I0229 19:03:29.100060   48088 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:03:29.100070   48088 kubeadm.go:322] 
	I0229 19:03:29.100100   48088 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:03:29.100173   48088 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:03:29.100239   48088 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:03:29.100252   48088 kubeadm.go:322] 
	I0229 19:03:29.100319   48088 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:03:29.100329   48088 kubeadm.go:322] 
	I0229 19:03:29.100388   48088 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:03:29.100398   48088 kubeadm.go:322] 
	I0229 19:03:29.100463   48088 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:03:29.100559   48088 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:03:29.100651   48088 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:03:29.100661   48088 kubeadm.go:322] 
	I0229 19:03:29.100763   48088 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:03:29.100862   48088 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:03:29.100877   48088 kubeadm.go:322] 
	I0229 19:03:29.100984   48088 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101114   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:03:29.101143   48088 kubeadm.go:322] 	--control-plane 
	I0229 19:03:29.101152   48088 kubeadm.go:322] 
	I0229 19:03:29.101249   48088 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:03:29.101258   48088 kubeadm.go:322] 
	I0229 19:03:29.101351   48088 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101473   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:03:29.101488   48088 cni.go:84] Creating CNI manager for ""
	I0229 19:03:29.101497   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:03:29.103073   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:03:29.104219   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:03:29.170952   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:03:29.239084   48088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:03:29.239154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:29.239173   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=default-k8s-diff-port-153528 minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:25.847505   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:28.346494   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:29.423784   48088 ops.go:34] apiserver oom_adj: -16
	I0229 19:03:29.641150   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.141394   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.641982   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.141220   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.642229   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.141232   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.641372   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.141757   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.641285   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:34.141462   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.346615   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:32.844207   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.846669   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.641857   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.142068   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.641289   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.142146   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.641965   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.141335   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.641778   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.141415   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.641267   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:39.141162   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.846708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.347339   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.642154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.141271   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.641433   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.141522   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.641353   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.787617   48088 kubeadm.go:1088] duration metric: took 12.548525295s to wait for elevateKubeSystemPrivileges.
	I0229 19:03:41.787657   48088 kubeadm.go:406] StartCluster complete in 5m24.60631624s
	I0229 19:03:41.787678   48088 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.787771   48088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:03:41.789341   48088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.789617   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:03:41.789716   48088 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:03:41.789815   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:03:41.789835   48088 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789835   48088 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789856   48088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153528"
	I0229 19:03:41.789821   48088 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789879   48088 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789890   48088 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:03:41.789937   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.789861   48088 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789963   48088 addons.go:243] addon metrics-server should already be in state true
	I0229 19:03:41.790008   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.790304   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790312   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790332   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790338   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790374   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790417   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.806924   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0229 19:03:41.807115   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0229 19:03:41.807481   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.807671   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808017   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808036   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808178   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808194   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808251   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0229 19:03:41.808377   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.808613   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808953   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.808999   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.809113   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.809136   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.809418   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809604   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809789   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.810683   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.810718   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.813030   48088 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.813045   48088 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:03:41.813066   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.813309   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.813321   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.824373   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0229 19:03:41.824768   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.825263   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.825280   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.825557   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.825699   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.827334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.828844   48088 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:03:41.829931   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:03:41.829943   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:03:41.829968   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.833079   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833090   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0229 19:03:41.833451   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.833516   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.833527   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.833895   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.833913   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0229 19:03:41.833917   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.833982   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.834140   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.834272   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.834795   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.835272   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.835293   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835298   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.835675   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835791   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.835798   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.835827   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.837394   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.839349   48088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:41.840971   48088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:41.840992   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:03:41.841008   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.844091   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844475   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.844505   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844735   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.844954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.845143   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.845300   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.853524   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0229 19:03:41.855329   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.855788   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.855809   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.856135   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.856317   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.857882   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.858179   48088 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:41.858193   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:03:41.858214   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.861292   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861640   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.861664   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.862088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.862241   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.862413   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:42.162741   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:03:42.162760   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:03:42.164559   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:42.185784   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:03:42.225413   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:42.283759   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:03:42.283792   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:03:42.296879   48088 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-153528" context rescaled to 1 replicas
	I0229 19:03:42.296912   48088 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:03:42.298687   48088 out.go:177] * Verifying Kubernetes components...
	I0229 19:03:42.300011   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:42.478347   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:42.478370   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:03:42.626185   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:44.654846   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469026575s)
	I0229 19:03:44.654876   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.429431888s)
	I0229 19:03:44.654891   48088 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:44.654927   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.654937   48088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.354896537s)
	I0229 19:03:44.654987   48088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.654942   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655090   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.490505268s)
	I0229 19:03:44.655115   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655125   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655326   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655344   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655346   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655354   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655357   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655370   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655379   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655562   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655604   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655579   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655662   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655821   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655659   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659331   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033110492s)
	I0229 19:03:44.659381   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659393   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659652   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659667   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659675   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659683   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659685   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659902   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659939   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659950   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659960   48088 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-153528"
	I0229 19:03:44.683397   48088 node_ready.go:49] node "default-k8s-diff-port-153528" has status "Ready":"True"
	I0229 19:03:44.683417   48088 node_ready.go:38] duration metric: took 28.415374ms waiting for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.683427   48088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:44.685811   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.685831   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.686088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.686110   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.686122   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.687970   48088 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0229 19:03:41.849469   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.345593   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.689232   48088 addons.go:505] enable addons completed in 2.899518009s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0229 19:03:44.693381   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720914   48088 pod_ready.go:92] pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.720942   48088 pod_ready.go:81] duration metric: took 27.53714ms waiting for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720954   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729596   48088 pod_ready.go:92] pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.729618   48088 pod_ready.go:81] duration metric: took 8.655818ms waiting for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729628   48088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734112   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.734130   48088 pod_ready.go:81] duration metric: took 4.494255ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734137   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738843   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.738860   48088 pod_ready.go:81] duration metric: took 4.717537ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738868   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059153   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.059174   48088 pod_ready.go:81] duration metric: took 320.300485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059183   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465590   48088 pod_ready.go:92] pod "kube-proxy-bvrxx" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.465616   48088 pod_ready.go:81] duration metric: took 406.426237ms waiting for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465630   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858390   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.858413   48088 pod_ready.go:81] duration metric: took 392.775547ms waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858426   48088 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:47.866057   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:46.848336   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.344899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.866128   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.871764   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.346608   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:53.846506   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.394324   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.716776929s)
	I0229 19:03:58.394415   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:58.411946   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:58.422778   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:58.432981   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:58.433029   47515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:58.497643   47515 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:03:58.497784   47515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:58.673058   47515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:58.673181   47515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:58.673291   47515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:58.915681   47515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:54.366316   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:56.866740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.867746   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.917365   47515 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:58.917468   47515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:58.917556   47515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:58.917657   47515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:58.917758   47515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:58.917857   47515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:58.917933   47515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:58.918117   47515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:58.918699   47515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:58.919679   47515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:58.920578   47515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:58.921424   47515 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:58.921738   47515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:59.066887   47515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:59.215266   47515 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:03:59.404270   47515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:59.514467   47515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:59.615483   47515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:59.616256   47515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:59.619177   47515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:55.850264   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.346720   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:59.620798   47515 out.go:204]   - Booting up control plane ...
	I0229 19:03:59.620910   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:59.621009   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:59.621758   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:59.648331   47515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:59.649070   47515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:59.649141   47515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:59.796018   47515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:00.868393   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.366167   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:00.848016   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.347491   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:05.801078   47515 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003292 seconds
	I0229 19:04:05.820231   47515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:04:05.842846   47515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:04:06.388308   47515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:04:06.388598   47515 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-247197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:04:06.905903   47515 kubeadm.go:322] [bootstrap-token] Using token: 42vs85.s8nvx0pxc27k9bgo
	I0229 19:04:06.907650   47515 out.go:204]   - Configuring RBAC rules ...
	I0229 19:04:06.907813   47515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:04:06.913716   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:04:06.925730   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:04:06.929319   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:04:06.933110   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:04:06.938550   47515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:04:06.956559   47515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:04:07.216913   47515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:04:07.320534   47515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:04:07.321455   47515 kubeadm.go:322] 
	I0229 19:04:07.321548   47515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:04:07.321578   47515 kubeadm.go:322] 
	I0229 19:04:07.321696   47515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:04:07.321710   47515 kubeadm.go:322] 
	I0229 19:04:07.321752   47515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:04:07.321848   47515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:04:07.321914   47515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:04:07.321929   47515 kubeadm.go:322] 
	I0229 19:04:07.322021   47515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:04:07.322032   47515 kubeadm.go:322] 
	I0229 19:04:07.322099   47515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:04:07.322111   47515 kubeadm.go:322] 
	I0229 19:04:07.322182   47515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:04:07.322304   47515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:04:07.322404   47515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:04:07.322416   47515 kubeadm.go:322] 
	I0229 19:04:07.322559   47515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:04:07.322679   47515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:04:07.322704   47515 kubeadm.go:322] 
	I0229 19:04:07.322808   47515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.322922   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:04:07.322956   47515 kubeadm.go:322] 	--control-plane 
	I0229 19:04:07.322964   47515 kubeadm.go:322] 
	I0229 19:04:07.323090   47515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:04:07.323103   47515 kubeadm.go:322] 
	I0229 19:04:07.323230   47515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.323408   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:04:07.323921   47515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:04:07.323961   47515 cni.go:84] Creating CNI manager for ""
	I0229 19:04:07.323975   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:04:07.325925   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:04:07.327319   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:04:07.387016   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:04:07.434438   47515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:04:07.434538   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.434554   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=no-preload-247197 minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.752182   47515 ops.go:34] apiserver oom_adj: -16
	I0229 19:04:07.752320   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.955017   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:04:08.955134   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:04:08.956493   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:08.956586   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:08.956684   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:08.956809   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:08.956955   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:08.957116   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:08.957253   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:08.957304   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:08.957375   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:08.959231   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:08.959317   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:08.959429   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:08.959550   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:08.959637   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:08.959745   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:08.959792   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:08.959851   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:08.959934   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:08.960022   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:08.960099   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:08.960159   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:08.960227   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:08.960303   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:08.960349   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:08.960403   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:08.960462   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:08.960540   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:05.369713   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.871542   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:08.962078   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:08.962181   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:08.962279   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:08.962361   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:08.962470   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:08.962646   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:08.962689   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:08.962777   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.962968   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963056   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963331   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963436   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963646   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963761   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963949   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964053   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.964273   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964281   47919 kubeadm.go:322] 
	I0229 19:04:08.964313   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:04:08.964351   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:04:08.964358   47919 kubeadm.go:322] 
	I0229 19:04:08.964385   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:04:08.964441   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:04:08.964547   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:04:08.964560   47919 kubeadm.go:322] 
	I0229 19:04:08.964684   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:04:08.964734   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:04:08.964780   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:04:08.964789   47919 kubeadm.go:322] 
	I0229 19:04:08.964922   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:04:08.965053   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:04:08.965180   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:04:08.965255   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:04:08.965342   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:04:08.965438   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 19:04:08.965475   47919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 19:04:08.965520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:04:09.441915   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:09.459807   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:04:09.471061   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:04:09.471099   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:04:09.532830   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:09.532979   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:09.673720   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:09.673884   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:09.674071   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:09.905201   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:09.906612   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:09.915393   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:10.035443   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:05.845532   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.846899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:09.847708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.037103   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:10.037203   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:10.037335   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:10.037453   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:10.037558   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:10.037689   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:10.037832   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:10.038465   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:10.038932   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:10.039471   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:10.039874   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:10.039961   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:10.040045   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:10.157741   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:10.426271   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:10.528768   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:10.595099   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:10.596020   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:08.252779   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.753332   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.252867   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.752631   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.253281   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.753138   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.253104   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.752894   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.253271   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.753046   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.367912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:12.870689   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.597781   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:10.597872   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:10.602307   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:10.603371   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:10.604660   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:10.607876   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:12.346304   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:14.346555   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:13.252668   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:13.752660   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.252803   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.752360   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.252343   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.752568   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.252484   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.752977   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.253148   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.753112   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:17.867839   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.253109   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:18.753221   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.253179   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.752851   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.875013   47515 kubeadm.go:1088] duration metric: took 12.44055176s to wait for elevateKubeSystemPrivileges.
	I0229 19:04:19.875056   47515 kubeadm.go:406] StartCluster complete in 5m26.137187745s
	I0229 19:04:19.875078   47515 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.875156   47515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:04:19.876716   47515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.876957   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:04:19.877116   47515 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:04:19.877196   47515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247197"
	I0229 19:04:19.877207   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:04:19.877222   47515 addons.go:69] Setting metrics-server=true in profile "no-preload-247197"
	I0229 19:04:19.877208   47515 addons.go:69] Setting default-storageclass=true in profile "no-preload-247197"
	I0229 19:04:19.877269   47515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247197"
	I0229 19:04:19.877213   47515 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247197"
	W0229 19:04:19.877372   47515 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:04:19.877412   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877244   47515 addons.go:234] Setting addon metrics-server=true in "no-preload-247197"
	W0229 19:04:19.877465   47515 addons.go:243] addon metrics-server should already be in state true
	I0229 19:04:19.877519   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877697   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877734   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877787   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877822   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877875   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877905   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.895578   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0229 19:04:19.896005   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.896491   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.896516   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.897033   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.897628   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.897677   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.897705   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0229 19:04:19.897711   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0229 19:04:19.898072   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898171   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898512   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898533   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898653   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898674   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898854   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899002   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899159   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.899386   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.899433   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.902917   47515 addons.go:234] Setting addon default-storageclass=true in "no-preload-247197"
	W0229 19:04:19.902937   47515 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:04:19.902965   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.903374   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.903492   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.915592   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0229 19:04:19.916152   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.916347   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0229 19:04:19.916677   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.916694   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.916799   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.917168   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.917302   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.917314   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.917505   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918075   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.918253   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918351   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0229 19:04:19.918773   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.919153   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.919170   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.919631   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.919999   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.922165   47515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:04:19.920215   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.920473   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.923441   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.923454   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:04:19.923466   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:04:19.923481   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.924990   47515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:04:16.845870   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:19.926366   47515 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:19.926372   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926384   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:04:19.926402   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.926728   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.926752   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926908   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.927072   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.927216   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.927357   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.929366   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929709   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.929728   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929855   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.930000   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.930090   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.930171   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.940292   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0229 19:04:19.940856   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.941327   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.941354   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.941647   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.941817   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.943378   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.943608   47515 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:19.943624   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:04:19.943640   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.946715   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947112   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.947132   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947413   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.947546   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.947672   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.947795   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:20.159078   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:20.246059   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:04:20.246085   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:04:20.338238   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:04:20.338261   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:04:20.365954   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:04:20.383186   47515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247197" context rescaled to 1 replicas
	I0229 19:04:20.383231   47515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:04:20.385225   47515 out.go:177] * Verifying Kubernetes components...
	I0229 19:04:20.386616   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:20.395136   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:20.442555   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:20.442575   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:04:20.584731   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:21.931286   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772173305s)
	I0229 19:04:21.931338   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931350   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.931346   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.565356284s)
	I0229 19:04:21.931374   47515 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 19:04:21.931413   47515 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.544778173s)
	I0229 19:04:21.931439   47515 node_ready.go:35] waiting up to 6m0s for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.931456   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.536286802s)
	I0229 19:04:21.931484   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931493   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932214   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932216   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932230   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932243   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932252   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932269   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932251   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932321   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932330   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932340   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932458   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932470   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932629   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932649   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932656   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.949312   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.949338   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.949619   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.949662   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.949675   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.951119   47515 node_ready.go:49] node "no-preload-247197" has status "Ready":"True"
	I0229 19:04:21.951138   47515 node_ready.go:38] duration metric: took 19.687343ms waiting for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.951148   47515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:04:21.965909   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979164   47515 pod_ready.go:92] pod "coredns-76f75df574-4k6hl" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.979185   47515 pod_ready.go:81] duration metric: took 13.25328ms waiting for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979197   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987905   47515 pod_ready.go:92] pod "coredns-76f75df574-9z6k5" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.987924   47515 pod_ready.go:81] duration metric: took 8.719445ms waiting for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987935   47515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992310   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.992328   47515 pod_ready.go:81] duration metric: took 4.385196ms waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992339   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999702   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.999722   47515 pod_ready.go:81] duration metric: took 7.374368ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999733   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.010201   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425431238s)
	I0229 19:04:22.010236   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010249   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010564   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010605   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010614   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010635   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010644   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010882   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010900   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010910   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010910   47515 addons.go:470] Verifying addon metrics-server=true in "no-preload-247197"
	I0229 19:04:22.013314   47515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 19:04:22.014366   47515 addons.go:505] enable addons completed in 2.137254118s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 19:04:22.338772   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.338799   47515 pod_ready.go:81] duration metric: took 339.058404ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.338812   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737254   47515 pod_ready.go:92] pod "kube-proxy-vvkjv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.737280   47515 pod_ready.go:81] duration metric: took 398.461074ms waiting for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737294   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:20.370710   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:22.866800   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:20.846680   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.345140   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.135406   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:23.135428   47515 pod_ready.go:81] duration metric: took 398.125345ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:23.135440   47515 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:25.142619   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.143696   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.367175   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.380854   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.346266   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.844825   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.846222   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.642557   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.143195   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.866361   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.365864   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.344240   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.345406   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.642612   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.642921   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.366701   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.865897   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:38.866354   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.845225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.344488   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.142773   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.643462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:40.866485   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.367569   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.345439   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.346065   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:44.142927   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:46.642548   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.369460   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.867209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.845033   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.845603   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:48.643538   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:51.143346   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.365414   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.366281   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.609556   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:50.610341   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:50.610592   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:50.347163   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.846321   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.847146   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:53.643605   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.644824   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.866162   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:57.366119   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.610941   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:55.611235   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:57.345852   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.846768   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:58.141799   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:00.142827   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.642593   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.867791   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.366238   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.345863   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.844340   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.643708   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:07.142551   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.367016   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:06.866170   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.869317   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:05.611726   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:05.611996   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:06.846686   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.846956   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:09.143595   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.143779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.367337   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.865929   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.345732   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.346279   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.644332   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:16.143576   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.866653   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.844887   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:17.846717   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.642599   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.642837   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.643895   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.368483   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.866758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.346170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.845477   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.142628   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.643975   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.366726   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.866780   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.612622   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:25.612856   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:25.346171   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.346624   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:29.844724   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.142942   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.143445   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.367152   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.865657   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:31.845835   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.347482   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.642780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.642919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.870444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:37.367617   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.844507   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.845472   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.643505   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.142928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:39.865207   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.867210   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.344604   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.347346   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.143348   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.143659   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.643054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:44.366192   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:46.368043   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:48.867455   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.844395   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.845753   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.143481   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.643947   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:51.365758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:53.866493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.344819   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.346315   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:54.845777   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.145751   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.644326   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.866532   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.866831   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:56.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.345840   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:00.142068   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.142779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.870256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.365280   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:01.845248   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.347842   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:05.613204   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:06:05.613467   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613495   47919 kubeadm.go:322] 
	I0229 19:06:05.613547   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:06:05.613598   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:06:05.613608   47919 kubeadm.go:322] 
	I0229 19:06:05.613653   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:06:05.613694   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:06:05.613814   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:06:05.613823   47919 kubeadm.go:322] 
	I0229 19:06:05.613911   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:06:05.613941   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:06:05.613974   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:06:05.613980   47919 kubeadm.go:322] 
	I0229 19:06:05.614107   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:06:05.614240   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:06:05.614361   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:06:05.614432   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:06:05.614533   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:06:05.614577   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:06:05.615575   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:06:05.615689   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:06:05.615765   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:06:05.615822   47919 kubeadm.go:406] StartCluster complete in 8m8.067253054s
	I0229 19:06:05.615873   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:06:05.615920   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:06:05.671959   47919 cri.go:89] found id: ""
	I0229 19:06:05.671998   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.672018   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:05.672025   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:06:05.672075   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:06:05.715832   47919 cri.go:89] found id: ""
	I0229 19:06:05.715853   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.715860   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:06:05.715866   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:06:05.715911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:06:05.755305   47919 cri.go:89] found id: ""
	I0229 19:06:05.755334   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.755345   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:06:05.755351   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:06:05.755409   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:06:05.807907   47919 cri.go:89] found id: ""
	I0229 19:06:05.807938   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.807950   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:05.807957   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:06:05.808015   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:06:05.892777   47919 cri.go:89] found id: ""
	I0229 19:06:05.892805   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.892813   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:05.892818   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:06:05.892877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:06:05.931488   47919 cri.go:89] found id: ""
	I0229 19:06:05.931516   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.931527   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:05.931534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:06:05.931578   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:06:05.971989   47919 cri.go:89] found id: ""
	I0229 19:06:05.972018   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.972030   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:05.972037   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:06:05.972112   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:06:06.008174   47919 cri.go:89] found id: ""
	I0229 19:06:06.008198   47919 logs.go:276] 0 containers: []
	W0229 19:06:06.008208   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:06.008224   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:06.008241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:06.024924   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:06.024953   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:06.111879   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:06.111904   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:06:06.111918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:06:06.221563   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:06:06.221593   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:06.266861   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:06.266897   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:06.314923   47919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:06:06.314971   47919 out.go:239] * 
	W0229 19:06:06.315043   47919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.315065   47919 out.go:239] * 
	W0229 19:06:06.315824   47919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:06:06.318988   47919 out.go:177] 
	W0229 19:06:06.320200   47919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.320245   47919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:06:06.320270   47919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:06:06.321598   47919 out.go:177] 
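The run above fails because the kubelet never answers its healthz probe within kubeadm's 4m0s wait, and minikube's own suggestion is to inspect the kubelet unit and retry with a systemd cgroup driver. A minimal shell sketch of that advice follows; <profile> is a placeholder for the affected minikube profile, not a name taken from this log.

	# On the node (e.g. via 'minikube ssh -p <profile>'), see why the kubelet is not serving http://localhost:10248/healthz:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# The stderr warning above notes the unit is not enabled:
	sudo systemctl enable kubelet.service

	# Back on the host, retry the start with the override suggested in the log:
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd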
	I0229 19:06:04.143707   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.145980   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.366140   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.366873   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.366955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.852698   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:09.348579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.643671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.143678   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:10.865166   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:12.866971   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.845538   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:14.346445   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:13.642537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.643262   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.647209   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.366149   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.367209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:16.845485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:18.845671   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.647627   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:22.145622   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.866267   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:21.866857   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:20.845841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:23.349149   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.646242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:27.143078   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.368344   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:26.867329   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:25.846273   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:28.346226   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.642886   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.646657   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.365191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.366142   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:33.865692   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:30.845019   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:32.845500   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:34.142811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:36.144736   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.870114   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.365999   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.347102   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:37.347579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:39.845962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.642930   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.642989   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.645337   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.366651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.865651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:41.846699   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.348062   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:45.145291   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.643786   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.866389   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.365775   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:46.844303   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:48.845366   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:50.143250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:52.642758   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:49.366973   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.865400   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.868123   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.345427   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.346292   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:54.643044   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.643641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.366088   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.865505   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:55.845353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.345421   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.644239   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.142462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.374753   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:03.866228   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:00.345809   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.845528   47608 pod_ready.go:81] duration metric: took 4m0.007876165s waiting for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:01.845551   47608 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:01.845562   47608 pod_ready.go:38] duration metric: took 4m0.790976213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
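At this point the harness has polled the metrics-server pod roughly every two seconds for four minutes without it reporting Ready. A hand-run equivalent of the same check is sketched below; it assumes kubectl is pointed at the embed-certs-991128 context that this run configures at the end of its log.

	# Inspect the pod the wait loop was watching (name taken from this log):
	kubectl --context embed-certs-991128 -n kube-system get pod metrics-server-57f55c9bc5-r66xw -o wide
	kubectl --context embed-certs-991128 -n kube-system describe pod metrics-server-57f55c9bc5-r66xw
	# Or wait with the same four-minute budget the harness uses:
	kubectl --context embed-certs-991128 -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-r66xw --timeout=4m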
	I0229 19:07:01.845581   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:01.845611   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:01.845671   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:01.901601   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:01.901625   47608 cri.go:89] found id: ""
	I0229 19:07:01.901636   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:01.901693   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.906698   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:01.906771   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:01.947360   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:01.947383   47608 cri.go:89] found id: ""
	I0229 19:07:01.947395   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:01.947453   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.952251   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:01.952314   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:01.996254   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:01.996279   47608 cri.go:89] found id: ""
	I0229 19:07:01.996289   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:01.996346   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.001158   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:02.001229   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:02.039559   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.039583   47608 cri.go:89] found id: ""
	I0229 19:07:02.039593   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:02.039653   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.045320   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:02.045439   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:02.091908   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.091932   47608 cri.go:89] found id: ""
	I0229 19:07:02.091941   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:02.092002   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.097461   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:02.097533   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:02.142993   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.143017   47608 cri.go:89] found id: ""
	I0229 19:07:02.143043   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:02.143114   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.148395   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:02.148469   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:02.189479   47608 cri.go:89] found id: ""
	I0229 19:07:02.189500   47608 logs.go:276] 0 containers: []
	W0229 19:07:02.189508   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:02.189513   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:02.189567   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:02.237218   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.237238   47608 cri.go:89] found id: ""
	I0229 19:07:02.237246   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:02.237299   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.242232   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:02.242256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:02.258190   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:02.258213   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:02.401759   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:02.401786   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:02.455230   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:02.455256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.507842   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:02.507870   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:02.562721   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:02.562747   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:02.655664   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:02.655696   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:02.711422   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:02.711450   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:02.763124   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:02.763151   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.812093   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:02.812126   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.863781   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:02.863810   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.909931   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:02.909956   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
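The block above is minikube's log-gathering pass: it resolves one container ID per control-plane component with crictl, tails that container's logs, and pulls the kubelet and CRI-O unit journals. The same sequence run by hand on the node looks roughly like this; the commands mirror the ones in the log, with 'kube-apiserver' standing in for whichever component is being chased.

	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)   # one ID per component name
	sudo crictl logs --tail 400 "$ID"                       # that component's logs
	sudo journalctl -u kubelet -n 400                       # kubelet unit journal
	sudo journalctl -u crio -n 400                          # CRI-O unit journal
	sudo crictl ps -a                                       # overall container status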
	I0229 19:07:03.148571   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.642292   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:07.646950   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.866773   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:08.364842   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.846592   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:05.868139   47608 api_server.go:72] duration metric: took 4m6.97199894s to wait for apiserver process to appear ...
	I0229 19:07:05.868162   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:05.868198   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:05.868254   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:05.911179   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:05.911204   47608 cri.go:89] found id: ""
	I0229 19:07:05.911213   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:05.911283   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.917051   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:05.917127   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:05.958278   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:05.958304   47608 cri.go:89] found id: ""
	I0229 19:07:05.958312   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:05.958366   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.963467   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:05.963538   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:06.003497   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.003516   47608 cri.go:89] found id: ""
	I0229 19:07:06.003525   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:06.003578   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.008829   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:06.008900   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:06.048632   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.048654   47608 cri.go:89] found id: ""
	I0229 19:07:06.048662   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:06.048719   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.053674   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:06.053725   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:06.095377   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.095398   47608 cri.go:89] found id: ""
	I0229 19:07:06.095406   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:06.095455   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.100277   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:06.100344   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:06.141330   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.141351   47608 cri.go:89] found id: ""
	I0229 19:07:06.141361   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:06.141418   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.146628   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:06.146675   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:06.195525   47608 cri.go:89] found id: ""
	I0229 19:07:06.195552   47608 logs.go:276] 0 containers: []
	W0229 19:07:06.195563   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:06.195570   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:06.195626   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:06.242893   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.242912   47608 cri.go:89] found id: ""
	I0229 19:07:06.242918   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:06.242963   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.247876   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:06.247894   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:06.264869   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:06.264905   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:06.403612   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:06.403639   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:06.468541   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:06.468569   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:06.523984   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:06.524016   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.599105   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:06.599133   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.672044   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:06.672074   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:06.772478   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:06.772509   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.817949   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:06.817978   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.866713   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:06.866743   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.912206   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:06.912234   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:07.320100   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:07.320136   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:09.875603   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 19:07:09.884525   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 19:07:09.886045   47608 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:09.886063   47608 api_server.go:131] duration metric: took 4.017895877s to wait for apiserver health ...
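Once the apiserver process is found, the harness probes /healthz over HTTPS and accepts a 200 with body "ok". A hand-rolled version of the same probe, using the endpoint from this log, might look like the following; -k skips verification of minikube's self-signed cluster CA, and the kubectl form reuses the cluster's own credentials instead.

	curl -k https://192.168.61.34:8443/healthz               # expected body: ok
	kubectl --context embed-certs-991128 get --raw /healthz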
	I0229 19:07:09.886071   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:09.886091   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:09.886137   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:09.940809   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:09.940831   47608 cri.go:89] found id: ""
	I0229 19:07:09.940838   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:09.940901   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:09.945610   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:09.945668   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:09.995270   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:09.995291   47608 cri.go:89] found id: ""
	I0229 19:07:09.995299   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:09.995353   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.000358   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:10.000431   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:10.052073   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.052094   47608 cri.go:89] found id: ""
	I0229 19:07:10.052103   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:10.052164   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.058993   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:10.059071   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:10.110467   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.110494   47608 cri.go:89] found id: ""
	I0229 19:07:10.110501   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:10.110556   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.115491   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:10.115545   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:10.159522   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.159540   47608 cri.go:89] found id: ""
	I0229 19:07:10.159548   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:10.159602   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.164162   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:10.164223   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:10.204583   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:10.204602   47608 cri.go:89] found id: ""
	I0229 19:07:10.204623   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:10.204699   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.209550   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:10.209602   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:10.246884   47608 cri.go:89] found id: ""
	I0229 19:07:10.246907   47608 logs.go:276] 0 containers: []
	W0229 19:07:10.246915   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:10.246925   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:10.246970   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:10.142347   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.142912   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:10.286397   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:10.286420   47608 cri.go:89] found id: ""
	I0229 19:07:10.286429   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:10.286476   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.292279   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:10.292303   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:10.432648   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:10.432683   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:10.485438   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:10.485468   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:10.532671   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:10.532702   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.574743   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:10.574768   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.625137   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:10.625164   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.669432   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:10.669457   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:11.008876   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:11.008906   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:11.060752   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:11.060785   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:11.167311   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:11.167344   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:11.185133   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:11.185160   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:11.251587   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:11.251614   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:13.809877   47608 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:13.809904   47608 system_pods.go:61] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.809910   47608 system_pods.go:61] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.809915   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.809920   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.809924   47608 system_pods.go:61] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.809928   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.809937   47608 system_pods.go:61] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.809945   47608 system_pods.go:61] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.809957   47608 system_pods.go:74] duration metric: took 3.923878638s to wait for pod list to return data ...
	I0229 19:07:13.809967   47608 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:13.814425   47608 default_sa.go:45] found service account: "default"
	I0229 19:07:13.814451   47608 default_sa.go:55] duration metric: took 4.476391ms for default service account to be created ...
	I0229 19:07:13.814463   47608 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:13.822812   47608 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:13.822834   47608 system_pods.go:89] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.822842   47608 system_pods.go:89] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.822849   47608 system_pods.go:89] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.822856   47608 system_pods.go:89] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.822864   47608 system_pods.go:89] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.822871   47608 system_pods.go:89] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.822883   47608 system_pods.go:89] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.822893   47608 system_pods.go:89] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.822908   47608 system_pods.go:126] duration metric: took 8.437411ms to wait for k8s-apps to be running ...
	I0229 19:07:13.822919   47608 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:13.822973   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:13.841166   47608 system_svc.go:56] duration metric: took 18.240886ms WaitForService to wait for kubelet.
	I0229 19:07:13.841190   47608 kubeadm.go:581] duration metric: took 4m14.94505166s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:13.841213   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:13.844369   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:13.844393   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:13.844404   47608 node_conditions.go:105] duration metric: took 3.186855ms to run NodePressure ...
	I0229 19:07:13.844416   47608 start.go:228] waiting for startup goroutines ...
	I0229 19:07:13.844425   47608 start.go:233] waiting for cluster config update ...
	I0229 19:07:13.844438   47608 start.go:242] writing updated cluster config ...
	I0229 19:07:13.844737   47608 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:13.894129   47608 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:13.896056   47608 out.go:177] * Done! kubectl is now configured to use "embed-certs-991128" cluster and "default" namespace by default
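The run ends with kubectl wired to the embed-certs-991128 context and a one-minor-version skew noted between the 1.29.2 client and the 1.28.4 cluster. A quick way to confirm that state from the host, sketched under the assumption that the kubeconfig written above is the active one:

	kubectl config current-context                      # should print embed-certs-991128
	kubectl --context embed-certs-991128 get nodes
	kubectl --context embed-certs-991128 -n kube-system get pods
	kubectl version --output=yaml                       # shows the 1.29.2 / 1.28.4 skew noted above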
	I0229 19:07:10.367615   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.866425   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.145357   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:16.642943   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.867561   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:17.366556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.143410   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.147970   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.367285   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.865048   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.868674   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.643039   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.643205   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:27.643525   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.869656   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:28.369270   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.142250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.142304   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.865630   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.870509   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:34.143254   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:36.645374   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:35.367229   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:37.865920   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:38.646004   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:41.146450   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:40.368452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:42.866110   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:43.643363   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.643443   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:47.644208   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:44.868350   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.865595   48088 pod_ready.go:81] duration metric: took 4m0.007156363s waiting for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:45.865618   48088 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:45.865628   48088 pod_ready.go:38] duration metric: took 4m1.182191329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:07:45.865647   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:45.865681   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:45.865737   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:45.924104   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:45.924127   48088 cri.go:89] found id: ""
	I0229 19:07:45.924136   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:45.924194   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.929769   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:45.929823   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:45.973018   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:45.973039   48088 cri.go:89] found id: ""
	I0229 19:07:45.973048   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:45.973102   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.978222   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:45.978284   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:46.019965   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.019984   48088 cri.go:89] found id: ""
	I0229 19:07:46.019991   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:46.020046   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.024852   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:46.024909   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:46.067904   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.067921   48088 cri.go:89] found id: ""
	I0229 19:07:46.067928   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:46.067970   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.073790   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:46.073855   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:46.113273   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.113299   48088 cri.go:89] found id: ""
	I0229 19:07:46.113320   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:46.113375   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.118626   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:46.118692   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:46.169986   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.170008   48088 cri.go:89] found id: ""
	I0229 19:07:46.170017   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:46.170065   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.175639   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:46.175699   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:46.220353   48088 cri.go:89] found id: ""
	I0229 19:07:46.220383   48088 logs.go:276] 0 containers: []
	W0229 19:07:46.220394   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:46.220402   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:46.220460   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:46.267009   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.267045   48088 cri.go:89] found id: ""
	I0229 19:07:46.267055   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:46.267105   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.272422   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:46.272445   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.337524   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:46.337554   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:46.454444   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:46.454484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:46.601211   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:46.601239   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:46.661763   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:46.661794   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.707569   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:46.707594   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.774076   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:46.774107   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.821259   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:46.821288   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:46.837496   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:46.837519   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:46.890812   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:46.890841   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.934532   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:46.934559   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:47.395235   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:47.395269   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.144146   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:52.144673   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:49.959190   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:49.978381   48088 api_server.go:72] duration metric: took 4m7.681437754s to wait for apiserver process to appear ...
	I0229 19:07:49.978407   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:49.978464   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:49.978513   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:50.028150   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.028176   48088 cri.go:89] found id: ""
	I0229 19:07:50.028186   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:50.028242   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.033649   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:50.033719   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:50.083761   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.083785   48088 cri.go:89] found id: ""
	I0229 19:07:50.083795   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:50.083866   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.088829   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:50.088913   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:50.138098   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:50.138120   48088 cri.go:89] found id: ""
	I0229 19:07:50.138148   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:50.138203   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.143751   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:50.143824   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:50.181953   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:50.181973   48088 cri.go:89] found id: ""
	I0229 19:07:50.182005   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:50.182061   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.187673   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:50.187738   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:50.239764   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.239787   48088 cri.go:89] found id: ""
	I0229 19:07:50.239797   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:50.239945   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.244916   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:50.244980   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:50.285741   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.285764   48088 cri.go:89] found id: ""
	I0229 19:07:50.285774   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:50.285833   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.290537   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:50.290607   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:50.334081   48088 cri.go:89] found id: ""
	I0229 19:07:50.334113   48088 logs.go:276] 0 containers: []
	W0229 19:07:50.334125   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:50.334133   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:50.334218   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:50.382210   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.382240   48088 cri.go:89] found id: ""
	I0229 19:07:50.382249   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:50.382309   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.387638   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:50.387659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:50.402846   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:50.402871   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.449452   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:50.449484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.503887   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:50.503921   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.545549   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:50.545620   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.607117   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:50.607144   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:50.711241   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:50.711302   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:50.857588   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:50.857622   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.912908   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:50.912943   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.958888   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:50.958918   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:51.008029   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:51.008059   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:51.064227   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:51.064262   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:53.940284   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 19:07:53.945473   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 19:07:53.946909   48088 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:53.946925   48088 api_server.go:131] duration metric: took 3.968511547s to wait for apiserver health ...
	I0229 19:07:53.946938   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:53.946958   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:53.947009   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:53.996337   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:53.996357   48088 cri.go:89] found id: ""
	I0229 19:07:53.996364   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:53.996409   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.001386   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:54.001465   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:54.051794   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.051814   48088 cri.go:89] found id: ""
	I0229 19:07:54.051821   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:54.051869   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.057560   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:54.057631   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:54.110088   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.110105   48088 cri.go:89] found id: ""
	I0229 19:07:54.110113   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:54.110156   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.115737   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:54.115800   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:54.162820   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.162842   48088 cri.go:89] found id: ""
	I0229 19:07:54.162850   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:54.162899   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.168740   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:54.168795   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:54.210577   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.210617   48088 cri.go:89] found id: ""
	I0229 19:07:54.210625   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:54.210673   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.216266   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:54.216317   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:54.255416   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.255442   48088 cri.go:89] found id: ""
	I0229 19:07:54.255451   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:54.255511   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.260522   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:54.260585   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:54.645279   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:57.144190   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:54.309825   48088 cri.go:89] found id: ""
	I0229 19:07:54.309861   48088 logs.go:276] 0 containers: []
	W0229 19:07:54.309873   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:54.309881   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:54.309950   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:54.353200   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:54.353219   48088 cri.go:89] found id: ""
	I0229 19:07:54.353225   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:54.353278   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.357943   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:54.357965   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:54.456867   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:54.456901   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:54.474633   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:54.474659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:54.538218   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:54.538256   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.591570   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:54.591607   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:54.643603   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:54.643638   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:54.787255   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:54.787284   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.836816   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:54.836840   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.888605   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:54.888635   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.930913   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:54.930942   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.996868   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:54.996904   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:55.038936   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:55.038975   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:57.896563   48088 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:57.896600   48088 system_pods.go:61] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.896607   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.896612   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.896617   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.896621   48088 system_pods.go:61] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.896625   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.896633   48088 system_pods.go:61] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.896641   48088 system_pods.go:61] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.896650   48088 system_pods.go:74] duration metric: took 3.949706328s to wait for pod list to return data ...
	I0229 19:07:57.896661   48088 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:57.899954   48088 default_sa.go:45] found service account: "default"
	I0229 19:07:57.899982   48088 default_sa.go:55] duration metric: took 3.312049ms for default service account to be created ...
	I0229 19:07:57.899994   48088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:57.906500   48088 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:57.906535   48088 system_pods.go:89] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.906545   48088 system_pods.go:89] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.906552   48088 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.906560   48088 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.906566   48088 system_pods.go:89] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.906572   48088 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.906584   48088 system_pods.go:89] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.906599   48088 system_pods.go:89] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.906611   48088 system_pods.go:126] duration metric: took 6.610073ms to wait for k8s-apps to be running ...
	I0229 19:07:57.906624   48088 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:57.906684   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:57.928757   48088 system_svc.go:56] duration metric: took 22.126375ms WaitForService to wait for kubelet.
	I0229 19:07:57.928784   48088 kubeadm.go:581] duration metric: took 4m15.631847215s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:57.928802   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:57.932654   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:57.932673   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:57.932683   48088 node_conditions.go:105] duration metric: took 3.87689ms to run NodePressure ...
	I0229 19:07:57.932693   48088 start.go:228] waiting for startup goroutines ...
	I0229 19:07:57.932700   48088 start.go:233] waiting for cluster config update ...
	I0229 19:07:57.932711   48088 start.go:242] writing updated cluster config ...
	I0229 19:07:57.932956   48088 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:57.982872   48088 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:57.984759   48088 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-153528" cluster and "default" namespace by default
	I0229 19:07:59.144395   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:01.643273   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:04.142449   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:06.145652   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:08.644566   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:11.144108   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:13.147164   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:15.646715   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:18.143168   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:20.643045   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:22.644969   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:23.142859   47515 pod_ready.go:81] duration metric: took 4m0.007407175s waiting for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	E0229 19:08:23.142882   47515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:08:23.142892   47515 pod_ready.go:38] duration metric: took 4m1.191734178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:08:23.142918   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:08:23.142959   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:23.143015   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:23.200836   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.200855   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:23.200861   47515 cri.go:89] found id: ""
	I0229 19:08:23.200868   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:23.200925   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.206581   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.211810   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:23.211873   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:23.257499   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.257518   47515 cri.go:89] found id: ""
	I0229 19:08:23.257526   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:23.257568   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.262794   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:23.262858   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:23.314356   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:23.314379   47515 cri.go:89] found id: ""
	I0229 19:08:23.314389   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:23.314433   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.319774   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:23.319828   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:23.363724   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.363746   47515 cri.go:89] found id: ""
	I0229 19:08:23.363753   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:23.363798   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.368994   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:23.369044   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:23.410298   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:23.410317   47515 cri.go:89] found id: ""
	I0229 19:08:23.410323   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:23.410375   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.416866   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:23.416941   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:23.460286   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:23.460313   47515 cri.go:89] found id: ""
	I0229 19:08:23.460323   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:23.460378   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.467279   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:23.467343   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:23.505758   47515 cri.go:89] found id: ""
	I0229 19:08:23.505790   47515 logs.go:276] 0 containers: []
	W0229 19:08:23.505801   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:23.505808   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:23.505870   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:23.545547   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.545573   47515 cri.go:89] found id: ""
	I0229 19:08:23.545581   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:23.545642   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.550632   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:23.550652   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.613033   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:23.613072   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.664593   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:23.664623   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.723282   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:23.723311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.764629   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:23.764655   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:24.254240   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:24.254271   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:24.321241   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:24.321267   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:24.472841   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:24.472870   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:24.492953   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:24.492987   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:24.603910   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:24.603952   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:24.651625   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:24.651653   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:24.693482   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:24.693508   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:24.746081   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:24.746111   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:27.342960   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:08:27.361722   47515 api_server.go:72] duration metric: took 4m6.978456788s to wait for apiserver process to appear ...
	I0229 19:08:27.361756   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:08:27.361795   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:27.361850   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:27.404496   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:27.404525   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:27.404530   47515 cri.go:89] found id: ""
	I0229 19:08:27.404538   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:27.404598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.409339   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.413757   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:27.413814   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:27.456993   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:27.457020   47515 cri.go:89] found id: ""
	I0229 19:08:27.457029   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:27.457089   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.462024   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:27.462088   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:27.506509   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:27.506530   47515 cri.go:89] found id: ""
	I0229 19:08:27.506539   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:27.506598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.511408   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:27.511480   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:27.558522   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:27.558545   47515 cri.go:89] found id: ""
	I0229 19:08:27.558554   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:27.558638   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.566043   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:27.566119   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:27.613465   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:27.613486   47515 cri.go:89] found id: ""
	I0229 19:08:27.613495   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:27.613556   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.618347   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:27.618412   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:27.668486   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:27.668510   47515 cri.go:89] found id: ""
	I0229 19:08:27.668519   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:27.668572   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.673416   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:27.673476   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:27.718790   47515 cri.go:89] found id: ""
	I0229 19:08:27.718813   47515 logs.go:276] 0 containers: []
	W0229 19:08:27.718824   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:27.718831   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:27.718888   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:27.766906   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:27.766988   47515 cri.go:89] found id: ""
	I0229 19:08:27.767005   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:27.767082   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.772046   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:27.772073   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:27.789085   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:27.789118   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:27.915599   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:27.915629   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:28.022219   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:28.022253   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:28.068916   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:28.068942   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:28.116119   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:28.116145   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:28.158177   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:28.158206   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:28.256419   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:28.256452   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:28.310964   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:28.310995   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:28.366330   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:28.366361   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:28.432543   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:28.432577   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:28.839513   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:28.839550   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:28.889908   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:28.889935   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.447297   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 19:08:31.456672   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 19:08:31.457930   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 19:08:31.457948   47515 api_server.go:131] duration metric: took 4.09618563s to wait for apiserver health ...
	I0229 19:08:31.457955   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:08:31.457974   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:31.458020   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:31.507399   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.507419   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:31.507424   47515 cri.go:89] found id: ""
	I0229 19:08:31.507433   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:31.507493   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.512606   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.516990   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:31.517059   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:31.558856   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:31.558878   47515 cri.go:89] found id: ""
	I0229 19:08:31.558886   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:31.558943   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.564106   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:31.564173   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:31.607870   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:31.607891   47515 cri.go:89] found id: ""
	I0229 19:08:31.607901   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:31.607963   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.612655   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:31.612706   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:31.653422   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:31.653442   47515 cri.go:89] found id: ""
	I0229 19:08:31.653455   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:31.653516   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.659010   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:31.659086   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:31.705187   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:31.705210   47515 cri.go:89] found id: ""
	I0229 19:08:31.705219   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:31.705333   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.710080   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:31.710130   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:31.752967   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:31.752991   47515 cri.go:89] found id: ""
	I0229 19:08:31.753000   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:31.753061   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.757915   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:31.757983   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:31.798767   47515 cri.go:89] found id: ""
	I0229 19:08:31.798794   47515 logs.go:276] 0 containers: []
	W0229 19:08:31.798804   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:31.798812   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:31.798872   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:31.841051   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.841071   47515 cri.go:89] found id: ""
	I0229 19:08:31.841078   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:31.841133   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.845698   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:31.845732   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.887190   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:31.887218   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:32.264861   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:32.264892   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:32.367323   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:32.367364   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:32.416687   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:32.416714   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:32.458459   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:32.458486   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:32.502450   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:32.502476   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:32.555285   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:32.555311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:32.602273   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:32.602303   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:32.655346   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:32.655373   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:32.716233   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:32.716262   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:32.733285   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:32.733311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:32.854014   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:32.854038   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:35.460690   47515 system_pods.go:59] 8 kube-system pods found
	I0229 19:08:35.460717   47515 system_pods.go:61] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.460721   47515 system_pods.go:61] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.460725   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.460728   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.460731   47515 system_pods.go:61] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.460734   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.460740   47515 system_pods.go:61] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.460743   47515 system_pods.go:61] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.460750   47515 system_pods.go:74] duration metric: took 4.002789673s to wait for pod list to return data ...
	I0229 19:08:35.460757   47515 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:08:35.463218   47515 default_sa.go:45] found service account: "default"
	I0229 19:08:35.463248   47515 default_sa.go:55] duration metric: took 2.483102ms for default service account to be created ...
	I0229 19:08:35.463261   47515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:08:35.469351   47515 system_pods.go:86] 8 kube-system pods found
	I0229 19:08:35.469372   47515 system_pods.go:89] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.469377   47515 system_pods.go:89] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.469383   47515 system_pods.go:89] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.469388   47515 system_pods.go:89] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.469392   47515 system_pods.go:89] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.469396   47515 system_pods.go:89] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.469402   47515 system_pods.go:89] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.469407   47515 system_pods.go:89] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.469415   47515 system_pods.go:126] duration metric: took 6.148455ms to wait for k8s-apps to be running ...
	I0229 19:08:35.469422   47515 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:08:35.469464   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:08:35.487453   47515 system_svc.go:56] duration metric: took 18.016016ms WaitForService to wait for kubelet.
	I0229 19:08:35.487485   47515 kubeadm.go:581] duration metric: took 4m15.104218747s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:08:35.487509   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:08:35.490828   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:08:35.490844   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 19:08:35.490854   47515 node_conditions.go:105] duration metric: took 3.34147ms to run NodePressure ...
	I0229 19:08:35.490864   47515 start.go:228] waiting for startup goroutines ...
	I0229 19:08:35.490871   47515 start.go:233] waiting for cluster config update ...
	I0229 19:08:35.490881   47515 start.go:242] writing updated cluster config ...
	I0229 19:08:35.491140   47515 ssh_runner.go:195] Run: rm -f paused
	I0229 19:08:35.539922   47515 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 19:08:35.542171   47515 out.go:177] * Done! kubectl is now configured to use "no-preload-247197" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.061808337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234176061702877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=851a416c-e804-41de-be9f-54144aaf3ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.062525084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d8002ad-c18c-4030-ae65-d2aa5b3b94ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.062578581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d8002ad-c18c-4030-ae65-d2aa5b3b94ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.062852896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d8002ad-c18c-4030-ae65-d2aa5b3b94ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.112497792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=509daf51-c1a4-4d5b-aeda-05d70a251cb3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.112597519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=509daf51-c1a4-4d5b-aeda-05d70a251cb3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.117310322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=785a6d2b-cf82-4d6b-8387-d8e11c4282ce name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.117809400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234176117697599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=785a6d2b-cf82-4d6b-8387-d8e11c4282ce name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.118332221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d94c77ac-4134-407d-9dc4-90fb29da2a20 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.118411910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d94c77ac-4134-407d-9dc4-90fb29da2a20 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.118577392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d94c77ac-4134-407d-9dc4-90fb29da2a20 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.163076455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d4620d6-e8c1-4d0d-b20c-6a6af4e9e11d name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.163212435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d4620d6-e8c1-4d0d-b20c-6a6af4e9e11d name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.165081314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50cc74bb-e37f-4a01-84d6-7ffebe78857f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.165491903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234176165468834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50cc74bb-e37f-4a01-84d6-7ffebe78857f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.166101153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=083cc6d8-92b5-40c5-a181-57dcdc2a4c7d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.166187578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=083cc6d8-92b5-40c5-a181-57dcdc2a4c7d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.166808086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=083cc6d8-92b5-40c5-a181-57dcdc2a4c7d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.205884837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9162f15-b82f-4e8b-a879-6e69171b488b name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.205978870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9162f15-b82f-4e8b-a879-6e69171b488b name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.207826807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee977263-5370-4bfa-a1e8-868c45b3a409 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.208670930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234176208581122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee977263-5370-4bfa-a1e8-868c45b3a409 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.209334894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8ee33b7-bd3f-46ca-a0e0-3df2daf538fa name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.209417001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8ee33b7-bd3f-46ca-a0e0-3df2daf538fa name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:16 embed-certs-991128 crio[671]: time="2024-02-29 19:16:16.209591666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8ee33b7-bd3f-46ca-a0e0-3df2daf538fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d4d0c25cc639       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   d021343dc78c4       storage-provisioner
	7220454898e12       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   e8b69c0180809       coredns-5dd5756b68-nth8z
	3327a9756b71a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   542b014e67e87       kube-proxy-5grst
	795516eef7b67       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   4199ed14d97b0       etcd-embed-certs-991128
	9099ab49263e5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   31761f95bbfbb       kube-controller-manager-embed-certs-991128
	f1accc151694b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   814a5a953c233       kube-scheduler-embed-certs-991128
	18f508cd43779       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   8fd8c4a1941ca       kube-apiserver-embed-certs-991128
	
	
	==> coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57010 - 5854 "HINFO IN 4633225628833145899.670971604328587180. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015183249s
	
	
	==> describe nodes <==
	Name:               embed-certs-991128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-991128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=embed-certs-991128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:02:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-991128
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:16:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.34
	  Hostname:    embed-certs-991128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 60eb2ab6d53b4d4cad87df9e82bf910b
	  System UUID:                60eb2ab6-d53b-4d4c-ad87-df9e82bf910b
	  Boot ID:                    3d3f6535-305d-44f2-ad07-f57f11ba5710
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nth8z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-991128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-991128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-991128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5grst                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-991128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-r66xw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-991128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-991128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-991128 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-991128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-991128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-991128 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-991128 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-991128 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-991128 event: Registered Node embed-certs-991128 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051235] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527779] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.311241] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.714758] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.348116] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.063973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062665] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.231859] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.144249] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.278556] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +17.190029] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.063270] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.700407] kauditd_printk_skb: 72 callbacks suppressed
	[  +7.468648] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:02] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.234285] systemd-fstab-generator[3362]: Ignoring "noauto" option for root device
	[  +4.665382] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.122649] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[ +12.974486] kauditd_printk_skb: 14 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] <==
	{"level":"info","ts":"2024-02-29T19:02:40.010853Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-02-29T19:02:40.010903Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.34:2380"}
	{"level":"info","ts":"2024-02-29T19:02:40.017619Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"860cec0469348f9b","initial-advertise-peer-urls":["https://192.168.61.34:2380"],"listen-peer-urls":["https://192.168.61.34:2380"],"advertise-client-urls":["https://192.168.61.34:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.34:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T19:02:40.017813Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T19:02:40.235843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T19:02:40.235943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T19:02:40.235977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgPreVoteResp from 860cec0469348f9b at term 1"}
	{"level":"info","ts":"2024-02-29T19:02:40.236007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgVoteResp from 860cec0469348f9b at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became leader at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 860cec0469348f9b elected leader 860cec0469348f9b at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.23931Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"860cec0469348f9b","local-member-attributes":"{Name:embed-certs-991128 ClientURLs:[https://192.168.61.34:2379]}","request-path":"/0/members/860cec0469348f9b/attributes","cluster-id":"3b988ca96e7ba1f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:02:40.24177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:02:40.244933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.34:2379"}
	{"level":"info","ts":"2024-02-29T19:02:40.241787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:02:40.246665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:02:40.250238Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.250807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:02:40.267814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T19:02:40.267877Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b988ca96e7ba1f2","local-member-id":"860cec0469348f9b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.267963Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.268003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:12:40.471107Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-02-29T19:12:40.474106Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":715,"took":"2.630272ms","hash":2473211201}
	{"level":"info","ts":"2024-02-29T19:12:40.474176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2473211201,"revision":715,"compact-revision":-1}
	
	
	==> kernel <==
	 19:16:16 up 19 min,  0 users,  load average: 0.08, 0.14, 0.11
	Linux embed-certs-991128 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] <==
	I0229 19:12:42.616335       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:12:43.616789       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:12:43.616902       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0229 19:12:43.616924       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:12:43.617060       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:12:43.616935       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:12:43.618087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:13:42.516501       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:13:43.617537       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:13:43.617683       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:13:43.617791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:13:43.618605       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:13:43.618837       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:13:43.618871       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:14:42.517189       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 19:15:42.516069       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:15:43.618624       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:43.618690       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:15:43.618697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:43.618985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:43.619107       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:15:43.620806       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] <==
	I0229 19:10:28.879588       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:10:58.498841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:10:58.889041       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:11:28.505405       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:11:28.899499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:11:58.513009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:11:58.908242       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:28.520943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:28.915833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:58.526522       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:58.924893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:28.534079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:28.935873       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:58.541950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:58.945190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:14:16.386401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.527µs"
	I0229 19:14:27.381792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="311.153µs"
	E0229 19:14:28.547330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:28.953634       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:58.554510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:58.963241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:28.560222       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:28.972179       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:58.567094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:58.981412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] <==
	I0229 19:03:00.180811       1 server_others.go:69] "Using iptables proxy"
	I0229 19:03:00.212117       1 node.go:141] Successfully retrieved node IP: 192.168.61.34
	I0229 19:03:00.381674       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 19:03:00.381697       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:03:00.386578       1 server_others.go:152] "Using iptables Proxier"
	I0229 19:03:00.386678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:03:00.387250       1 server.go:846] "Version info" version="v1.28.4"
	I0229 19:03:00.387347       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:03:00.391407       1 config.go:188] "Starting service config controller"
	I0229 19:03:00.391962       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:03:00.391999       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:03:00.392053       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:03:00.395031       1 config.go:315] "Starting node config controller"
	I0229 19:03:00.395039       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:03:00.492543       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:03:00.492559       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:03:00.495858       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] <==
	W0229 19:02:43.571026       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 19:02:43.571161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 19:02:43.619457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.619524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.627008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 19:02:43.627057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 19:02:43.681069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.681255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.734002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 19:02:43.734878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 19:02:43.799372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:02:43.799500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:02:43.848210       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 19:02:43.848268       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:02:43.850520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:02:43.850539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:02:43.909247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:02:43.909423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:02:43.939099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:02:43.939149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 19:02:43.942837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.942882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.976828       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.976918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0229 19:02:46.746332       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:13:50 embed-certs-991128 kubelet[3690]: E0229 19:13:50.364351    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:14:01 embed-certs-991128 kubelet[3690]: E0229 19:14:01.379291    3690 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:14:01 embed-certs-991128 kubelet[3690]: E0229 19:14:01.379616    3690 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:14:01 embed-certs-991128 kubelet[3690]: E0229 19:14:01.379959    3690 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pfjrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-r66xw_kube-system(8eb63357-6b36-49f3-98a5-c74bb4a9b09c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 19:14:01 embed-certs-991128 kubelet[3690]: E0229 19:14:01.380075    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:14:16 embed-certs-991128 kubelet[3690]: E0229 19:14:16.364904    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:14:27 embed-certs-991128 kubelet[3690]: E0229 19:14:27.364903    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:14:38 embed-certs-991128 kubelet[3690]: E0229 19:14:38.364087    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:14:46 embed-certs-991128 kubelet[3690]: E0229 19:14:46.468785    3690 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:14:46 embed-certs-991128 kubelet[3690]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:14:46 embed-certs-991128 kubelet[3690]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:14:46 embed-certs-991128 kubelet[3690]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:14:46 embed-certs-991128 kubelet[3690]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:14:51 embed-certs-991128 kubelet[3690]: E0229 19:14:51.364685    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:15:02 embed-certs-991128 kubelet[3690]: E0229 19:15:02.363700    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:15:15 embed-certs-991128 kubelet[3690]: E0229 19:15:15.364956    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:15:26 embed-certs-991128 kubelet[3690]: E0229 19:15:26.371337    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:15:37 embed-certs-991128 kubelet[3690]: E0229 19:15:37.364312    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]: E0229 19:15:46.468685    3690 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:15:52 embed-certs-991128 kubelet[3690]: E0229 19:15:52.364639    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:16:05 embed-certs-991128 kubelet[3690]: E0229 19:16:05.364006    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	
	
	==> storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] <==
	I0229 19:03:01.598428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:03:01.615668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:03:01.615843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:03:01.645506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3c0560f-6c58-46c3-9e8c-87fe1f4fcc81", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74 became leader
	I0229 19:03:01.646855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:03:01.647128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74!
	I0229 19:03:01.748366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-991128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-r66xw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw: exit status 1 (63.89974ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-r66xw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:16:58.83583051 +0000 UTC m=+5976.330814302
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
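(Editorial sketch, not part of the recorded run: the wait above uses the label selector and namespace reported by the test itself. Assuming the default-k8s-diff-port-153528 context were still available, the same readiness check could be approximated manually with kubectl; the 540s timeout below simply mirrors the test's 9m0s wait.

	kubectl --context default-k8s-diff-port-153528 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-153528 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
)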
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-153528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-153528 logs -n 25: (1.37386271s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:16:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:16:58.995744   52590 out.go:291] Setting OutFile to fd 1 ...
	I0229 19:16:58.996307   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996327   52590 out.go:304] Setting ErrFile to fd 2...
	I0229 19:16:58.996334   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996770   52590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 19:16:58.997864   52590 out.go:298] Setting JSON to false
	I0229 19:16:58.998729   52590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7163,"bootTime":1709227056,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 19:16:58.998808   52590 start.go:139] virtualization: kvm guest
	I0229 19:16:59.000879   52590 out.go:177] * [newest-cni-130594] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 19:16:59.002082   52590 notify.go:220] Checking for updates...
	I0229 19:16:59.002109   52590 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:16:59.003375   52590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:16:59.004595   52590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:16:59.005809   52590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.007062   52590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 19:16:59.008233   52590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:16:59.009849   52590 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.009988   52590 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.010089   52590 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:16:59.010161   52590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:16:59.047718   52590 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 19:16:59.048992   52590 start.go:299] selected driver: kvm2
	I0229 19:16:59.049008   52590 start.go:903] validating driver "kvm2" against <nil>
	I0229 19:16:59.049034   52590 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:16:59.050133   52590 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.050237   52590 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 19:16:59.068920   52590 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 19:16:59.068976   52590 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0229 19:16:59.069020   52590 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0229 19:16:59.069294   52590 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 19:16:59.069385   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:16:59.069402   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:16:59.069416   52590 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 19:16:59.069433   52590 start_flags.go:323] config:
	{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:16:59.069616   52590 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.071805   52590 out.go:177] * Starting control plane node newest-cni-130594 in cluster newest-cni-130594
	I0229 19:16:59.073149   52590 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 19:16:59.073183   52590 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 19:16:59.073192   52590 cache.go:56] Caching tarball of preloaded images
	I0229 19:16:59.073274   52590 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 19:16:59.073285   52590 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 19:16:59.073365   52590 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json ...
	I0229 19:16:59.073384   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json: {Name:mk3c0011dbfa18187928c8536e3b0cff4d138ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:16:59.073506   52590 start.go:365] acquiring machines lock for newest-cni-130594: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:16:59.073533   52590 start.go:369] acquired machines lock for "newest-cni-130594" in 14.85µs
	I0229 19:16:59.073548   52590 start.go:93] Provisioning new machine with config: &{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:16:59.073614   52590 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.573864010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234219573841122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f850e065-4174-481f-9123-8beb15fcd872 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.574385685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b370feee-1e15-4afc-8dc9-9a7b31bbe0a1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.574435792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b370feee-1e15-4afc-8dc9-9a7b31bbe0a1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.574670604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b370feee-1e15-4afc-8dc9-9a7b31bbe0a1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.625728999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34d5de12-a9ec-4b18-9f9b-4522d27821c7 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.625827165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34d5de12-a9ec-4b18-9f9b-4522d27821c7 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.627221517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ff92e1-507c-4592-9638-8c1d8223d8c5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.627826145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234219627800275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ff92e1-507c-4592-9638-8c1d8223d8c5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.628733219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30813466-b137-4c4b-8119-e330c4345716 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.628805425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30813466-b137-4c4b-8119-e330c4345716 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.628973483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30813466-b137-4c4b-8119-e330c4345716 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.674530229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c14950b8-bbd3-4d99-b526-882ca07d31f3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.674714876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c14950b8-bbd3-4d99-b526-882ca07d31f3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.676750121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e37ea36a-56d3-4f32-aea5-7c3dab6f7e12 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.677211213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234219677181763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e37ea36a-56d3-4f32-aea5-7c3dab6f7e12 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.678181317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d1bdca5-5d75-4423-a6df-bcb9499f7675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.678408248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d1bdca5-5d75-4423-a6df-bcb9499f7675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.678636065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d1bdca5-5d75-4423-a6df-bcb9499f7675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.723998495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c646611b-78ed-4858-81b0-4eed1e23c2c7 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.724099846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c646611b-78ed-4858-81b0-4eed1e23c2c7 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.727189079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e18a069-6da4-4552-97b5-4b0e09d9e2e7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.727702621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234219727677081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e18a069-6da4-4552-97b5-4b0e09d9e2e7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.728478681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a236366-da6e-431d-9658-8211a2070731 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.728651102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a236366-da6e-431d-9658-8211a2070731 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:59 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:16:59.728835764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a236366-da6e-431d-9658-8211a2070731 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd100a6a78ff3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   fa92f6f8dc963       storage-provisioner
	f3783ae6a7523       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   d54922b282ed1       coredns-5dd5756b68-fmptg
	66a474fccaab4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   7611ffeb0a2a3       kube-proxy-bvrxx
	7ad8f5f1b340c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   e4243c26556d8       kube-scheduler-default-k8s-diff-port-153528
	ea63327422de9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   eba21c4e573ce       etcd-default-k8s-diff-port-153528
	afb68f5e908ce       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   5585157703fb8       kube-apiserver-default-k8s-diff-port-153528
	f9076d6488b1c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   aca74cc915a02       kube-controller-manager-default-k8s-diff-port-153528
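The table above is the node-side container inventory captured during log collection. As an illustration only (not part of the test harness), a minimal Go sketch that reproduces this view by shelling out to crictl — assuming crictl is installed on the node and already configured for the CRI-O socket shown in these logs — could look like:

package main

import (
	"fmt"
	"os/exec"
)

// List the running containers on the node, roughly reproducing the
// "container status" table above. Assumes crictl is on PATH and its
// default config points at unix:///var/run/crio/crio.sock.
func main() {
	out, err := exec.Command("crictl", "ps").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl ps failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}

The same data could be read over gRPC from the CRI socket directly, but calling crictl keeps the sketch short.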
	
	
	==> coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39705 - 48666 "HINFO IN 6790378613609168493.1271217274832031905. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014988537s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-153528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-153528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=default-k8s-diff-port-153528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:03:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-153528
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:14:01 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:14:01 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:14:01 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:14:01 +0000   Thu, 29 Feb 2024 19:03:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    default-k8s-diff-port-153528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 aad8c663d8bf4a83b64ea1f43ab2b7c3
	  System UUID:                aad8c663-d8bf-4a83-b64e-a1f43ab2b7c3
	  Boot ID:                    cdea6de5-2171-467a-b107-96f0c7ab4b21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fmptg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-153528                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-153528             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-153528    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bvrxx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-153528             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-v95ws                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeNotReady
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeReady
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-153528 event: Registered Node default-k8s-diff-port-153528 in Controller
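The node conditions above can also be checked programmatically, which is useful when reproducing the UserAppExistsAfterStop timeouts outside the suite. A minimal client-go sketch follows; the kubeconfig path is a placeholder, not a value taken from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Print the conditions reported for the node described above.
func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-153528", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}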
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055202] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.052603] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 18:58] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.679083] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.009844] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.065816] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065492] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.194837] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.141990] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.318484] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +17.262167] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.073820] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.658975] kauditd_printk_skb: 72 callbacks suppressed
	[  +6.447092] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.326145] systemd-fstab-generator[3377]: Ignoring "noauto" option for root device
	[  +7.285665] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.114066] kauditd_printk_skb: 53 callbacks suppressed
	[ +12.601674] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.006417] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 19:15] hrtimer: interrupt took 5622851 ns
	
	
	==> etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] <==
	{"level":"info","ts":"2024-02-29T19:03:23.197229Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-02-29T19:03:23.205137Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-02-29T19:03:23.197641Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"5a5dd032def1271d","initial-advertise-peer-urls":["https://192.168.39.210:2380"],"listen-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T19:03:23.197872Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T19:03:23.218616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T19:03:23.21866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T19:03:23.218674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgPreVoteResp from 5a5dd032def1271d at term 1"}
	{"level":"info","ts":"2024-02-29T19:03:23.218683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:03:23.218689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d received MsgVoteResp from 5a5dd032def1271d at term 2"}
	{"level":"info","ts":"2024-02-29T19:03:23.218697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d became leader at term 2"}
	{"level":"info","ts":"2024-02-29T19:03:23.218703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5a5dd032def1271d elected leader 5a5dd032def1271d at term 2"}
	{"level":"info","ts":"2024-02-29T19:03:23.222749Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"5a5dd032def1271d","local-member-attributes":"{Name:default-k8s-diff-port-153528 ClientURLs:[https://192.168.39.210:2379]}","request-path":"/0/members/5a5dd032def1271d/attributes","cluster-id":"989b3f6bb1f1f8ce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:03:23.222798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:03:23.226962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:03:23.227054Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:03:23.22967Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:03:23.229786Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:03:23.229839Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:03:23.229866Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:03:23.235197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.210:2379"}
	{"level":"info","ts":"2024-02-29T19:03:23.241818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:03:23.246919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T19:13:23.663882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2024-02-29T19:13:23.666265Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"2.027129ms","hash":4116224803}
	{"level":"info","ts":"2024-02-29T19:13:23.66633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4116224803,"revision":725,"compact-revision":-1}
	
	
	==> kernel <==
	 19:17:00 up 19 min,  0 users,  load average: 0.47, 0.30, 0.20
	Linux default-k8s-diff-port-153528 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] <==
	I0229 19:13:25.669409       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:13:26.669939       1 handler_proxy.go:93] no RequestInfo found in the context
	W0229 19:13:26.669945       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:13:26.670015       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:13:26.670176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0229 19:13:26.670193       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:13:26.672180       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:14:25.533061       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:14:26.671273       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:26.671422       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:14:26.671457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:14:26.672701       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:26.672839       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:14:26.672869       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:15:25.531857       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 19:16:25.532131       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:16:26.672293       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:16:26.672501       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:16:26.672645       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:16:26.673212       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:16:26.673405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:16:26.674596       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] <==
	I0229 19:11:11.215094       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:11:40.719045       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:11:41.224059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:10.725839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:11.233267       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:40.732366       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:41.242109       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:10.738403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:11.250854       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:40.745870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:41.260521       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:10.752330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:11.270148       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:40.757840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:41.279249       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:02.336669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="356.074µs"
	E0229 19:15:10.765987       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:11.289410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:17.335776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="161.904µs"
	E0229 19:15:40.772950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:41.299193       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:10.781429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:11.308122       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:40.787797       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:41.321980       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] <==
	I0229 19:03:42.067048       1 server_others.go:69] "Using iptables proxy"
	I0229 19:03:42.086992       1 node.go:141] Successfully retrieved node IP: 192.168.39.210
	I0229 19:03:42.159694       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 19:03:42.159744       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:03:42.174504       1 server_others.go:152] "Using iptables Proxier"
	I0229 19:03:42.174692       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:03:42.174926       1 server.go:846] "Version info" version="v1.28.4"
	I0229 19:03:42.174936       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:03:42.184252       1 config.go:188] "Starting service config controller"
	I0229 19:03:42.184266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:03:42.184375       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:03:42.184380       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:03:42.197882       1 config.go:315] "Starting node config controller"
	I0229 19:03:42.197970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:03:42.286749       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:03:42.286792       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:03:42.301431       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] <==
	W0229 19:03:26.600990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 19:03:26.601061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 19:03:26.660906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:03:26.661039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:03:26.724353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 19:03:26.724410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 19:03:26.752810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:03:26.752890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:03:26.753042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 19:03:26.753092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 19:03:26.781010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 19:03:26.781062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 19:03:26.783731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:03:26.784212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:03:26.896147       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 19:03:26.896203       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:03:26.924356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 19:03:26.924509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 19:03:26.949294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:03:26.949348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:03:26.952405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:03:26.952455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 19:03:26.954305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:03:26.954350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0229 19:03:29.272066       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:14:37 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:14:37.315826    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:14:50 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:14:50.343790    3705 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:14:50 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:14:50.344100    3705 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:14:50 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:14:50.344789    3705 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f9tzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-v95ws_kube-system(e3545189-e705-4d6e-bab6-e1eceba83c0f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 19:14:50 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:14:50.344902    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:15:02 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:02.316958    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:15:17 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:17.317368    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:15:28 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:28.317752    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:15:29 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:29.415899    3705 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:15:29 default-k8s-diff-port-153528 kubelet[3705]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:15:29 default-k8s-diff-port-153528 kubelet[3705]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:15:29 default-k8s-diff-port-153528 kubelet[3705]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:15:29 default-k8s-diff-port-153528 kubelet[3705]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:15:41 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:41.316517    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:15:53 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:15:53.318270    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:16:06 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:06.317190    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:16:20 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:20.317890    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:16:29 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:29.413476    3705 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:16:29 default-k8s-diff-port-153528 kubelet[3705]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:16:29 default-k8s-diff-port-153528 kubelet[3705]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:16:29 default-k8s-diff-port-153528 kubelet[3705]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:16:29 default-k8s-diff-port-153528 kubelet[3705]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:16:31 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:31.315735    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:16:43 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:43.316035    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:16:56 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:16:56.316213    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	
	
	==> storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] <==
	I0229 19:03:45.174847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:03:45.186089       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:03:45.186163       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:03:45.200030       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:03:45.200498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d!
	I0229 19:03:45.201320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1708b3d-d235-4b3f-984d-84b1219f20cb", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d became leader
	I0229 19:03:45.301708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d!
	

-- /stdout --
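Reading the dump above: the recurring kubelet ErrImagePull/ImagePullBackOff entries for metrics-server-57f55c9bc5-v95ws follow from the metrics-server addon having been enabled with --registries=MetricsServer=fake.domain (see the Audit table further below), so the image fake.domain/registry.k8s.io/echoserver:1.4 is never expected to resolve; those errors are background noise for this test. The UserAppExistsAfterStop check waits for a dashboard pod (the same k8s-app=kubernetes-dashboard selector shown for the no-preload variant below), and no such pod appears anywhere in the component logs above. A manual re-check against the same profile would look roughly like the following (illustrative command, not part of the recorded run):

	kubectl --context default-k8s-diff-port-153528 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard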
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-v95ws
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws: exit status 1 (71.862682ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-v95ws" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws: exit status 1
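Note that the describe step above was run without a namespace flag, so it looked for metrics-server-57f55c9bc5-v95ws in the default namespace, while the kubelet log shows the pod living in kube-system; the NotFound / exit status 1 is therefore most likely a namespace mismatch in the post-mortem helper rather than the pod having disappeared. An equivalent manual check would be along the lines of (illustrative):

	kubectl --context default-k8s-diff-port-153528 -n kube-system describe pod metrics-server-57f55c9bc5-v95ws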
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.45s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247197 -n no-preload-247197
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:17:36.109243078 +0000 UTC m=+6013.604226862
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
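For context, the Audit table below lists addons enable dashboard -p no-preload-247197 (issued 29 Feb 24 18:52 UTC) with no End Time recorded, which lines up with no pod ever matching the k8s-app=kubernetes-dashboard selector after the restart. A manual re-check against this profile would look something like the following (illustrative, not part of the recorded run):

	out/minikube-linux-amd64 -p no-preload-247197 addons list
	kubectl --context no-preload-247197 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard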
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-247197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-247197 logs -n 25: (1.415958727s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:16:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:16:58.995744   52590 out.go:291] Setting OutFile to fd 1 ...
	I0229 19:16:58.996307   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996327   52590 out.go:304] Setting ErrFile to fd 2...
	I0229 19:16:58.996334   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996770   52590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 19:16:58.997864   52590 out.go:298] Setting JSON to false
	I0229 19:16:58.998729   52590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7163,"bootTime":1709227056,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 19:16:58.998808   52590 start.go:139] virtualization: kvm guest
	I0229 19:16:59.000879   52590 out.go:177] * [newest-cni-130594] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 19:16:59.002082   52590 notify.go:220] Checking for updates...
	I0229 19:16:59.002109   52590 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:16:59.003375   52590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:16:59.004595   52590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:16:59.005809   52590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.007062   52590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 19:16:59.008233   52590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:16:59.009849   52590 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.009988   52590 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.010089   52590 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:16:59.010161   52590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:16:59.047718   52590 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 19:16:59.048992   52590 start.go:299] selected driver: kvm2
	I0229 19:16:59.049008   52590 start.go:903] validating driver "kvm2" against <nil>
	I0229 19:16:59.049034   52590 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:16:59.050133   52590 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.050237   52590 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 19:16:59.068920   52590 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 19:16:59.068976   52590 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0229 19:16:59.069020   52590 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0229 19:16:59.069294   52590 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 19:16:59.069385   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:16:59.069402   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:16:59.069416   52590 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 19:16:59.069433   52590 start_flags.go:323] config:
	{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:16:59.069616   52590 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.071805   52590 out.go:177] * Starting control plane node newest-cni-130594 in cluster newest-cni-130594
	I0229 19:16:59.073149   52590 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 19:16:59.073183   52590 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 19:16:59.073192   52590 cache.go:56] Caching tarball of preloaded images
	I0229 19:16:59.073274   52590 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 19:16:59.073285   52590 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 19:16:59.073365   52590 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json ...
	I0229 19:16:59.073384   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json: {Name:mk3c0011dbfa18187928c8536e3b0cff4d138ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:16:59.073506   52590 start.go:365] acquiring machines lock for newest-cni-130594: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:16:59.073533   52590 start.go:369] acquired machines lock for "newest-cni-130594" in 14.85µs
	I0229 19:16:59.073548   52590 start.go:93] Provisioning new machine with config: &{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:16:59.073614   52590 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 19:16:59.075245   52590 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 19:16:59.075385   52590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:16:59.075425   52590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:16:59.090514   52590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0229 19:16:59.091034   52590 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:16:59.091647   52590 main.go:141] libmachine: Using API Version  1
	I0229 19:16:59.091667   52590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:16:59.091980   52590 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:16:59.092186   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:16:59.092385   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:16:59.092555   52590 start.go:159] libmachine.API.Create for "newest-cni-130594" (driver="kvm2")
	I0229 19:16:59.092596   52590 client.go:168] LocalClient.Create starting
	I0229 19:16:59.092645   52590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 19:16:59.092709   52590 main.go:141] libmachine: Decoding PEM data...
	I0229 19:16:59.092739   52590 main.go:141] libmachine: Parsing certificate...
	I0229 19:16:59.092824   52590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 19:16:59.092856   52590 main.go:141] libmachine: Decoding PEM data...
	I0229 19:16:59.092878   52590 main.go:141] libmachine: Parsing certificate...
	I0229 19:16:59.092907   52590 main.go:141] libmachine: Running pre-create checks...
	I0229 19:16:59.092928   52590 main.go:141] libmachine: (newest-cni-130594) Calling .PreCreateCheck
	I0229 19:16:59.093276   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:16:59.093734   52590 main.go:141] libmachine: Creating machine...
	I0229 19:16:59.093752   52590 main.go:141] libmachine: (newest-cni-130594) Calling .Create
	I0229 19:16:59.093910   52590 main.go:141] libmachine: (newest-cni-130594) Creating KVM machine...
	I0229 19:16:59.095186   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found existing default KVM network
	I0229 19:16:59.096522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.096383   52618 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:fc:f9} reservation:<nil>}
	I0229 19:16:59.097274   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.097217   52618 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:6b:02} reservation:<nil>}
	I0229 19:16:59.098193   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.098146   52618 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5b:27:eb} reservation:<nil>}
	I0229 19:16:59.099322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.099249   52618 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000384fb0}
	I0229 19:16:59.104597   52590 main.go:141] libmachine: (newest-cni-130594) DBG | trying to create private KVM network mk-newest-cni-130594 192.168.72.0/24...
	I0229 19:16:59.195393   52590 main.go:141] libmachine: (newest-cni-130594) DBG | private KVM network mk-newest-cni-130594 192.168.72.0/24 created
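The three "skipping subnet … that is taken" entries above, followed by "using free private subnet 192.168.72.0/24", show the kvm2 driver walking candidate private /24 ranges until one is unused and then creating the mk-newest-cni-130594 network on it. A minimal Go sketch of that selection pass, assuming a simplified subnetInUse check against the host's interfaces rather than minikube's actual network.go logic:

    package main

    import (
        "fmt"
        "net"
    )

    // subnetInUse is a stand-in for the real check: a candidate /24 is treated
    // as taken if any local interface already has an address inside it.
    func subnetInUse(cidr string) bool {
        _, candidate, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidate ranges in the order the log probes them.
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
        for _, cidr := range candidates {
            if subnetInUse(cidr) {
                fmt.Println("skipping subnet that is taken:", cidr)
                continue
            }
            fmt.Println("using free private subnet:", cidr)
            return
        }
    }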
	I0229 19:16:59.195451   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.195370   52618 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.195477   52590 main.go:141] libmachine: (newest-cni-130594) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 ...
	I0229 19:16:59.195492   52590 main.go:141] libmachine: (newest-cni-130594) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 19:16:59.195532   52590 main.go:141] libmachine: (newest-cni-130594) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 19:16:59.427706   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.427577   52618 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa...
	I0229 19:16:59.780165   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.780037   52618 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/newest-cni-130594.rawdisk...
	I0229 19:16:59.780192   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Writing magic tar header
	I0229 19:16:59.780210   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Writing SSH key tar header
	I0229 19:16:59.780230   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.780169   52618 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 ...
	I0229 19:16:59.780333   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 (perms=drwx------)
	I0229 19:16:59.780352   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594
	I0229 19:16:59.780360   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 19:16:59.780376   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 19:16:59.780391   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 19:16:59.780412   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 19:16:59.780425   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 19:16:59.780440   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.780449   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 19:16:59.780465   52590 main.go:141] libmachine: (newest-cni-130594) Creating domain...
	I0229 19:16:59.780530   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 19:16:59.780560   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 19:16:59.780572   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins
	I0229 19:16:59.780581   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home
	I0229 19:16:59.780589   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Skipping /home - not owner
	I0229 19:16:59.781516   52590 main.go:141] libmachine: (newest-cni-130594) define libvirt domain using xml: 
	I0229 19:16:59.781538   52590 main.go:141] libmachine: (newest-cni-130594) <domain type='kvm'>
	I0229 19:16:59.781563   52590 main.go:141] libmachine: (newest-cni-130594)   <name>newest-cni-130594</name>
	I0229 19:16:59.781596   52590 main.go:141] libmachine: (newest-cni-130594)   <memory unit='MiB'>2200</memory>
	I0229 19:16:59.781610   52590 main.go:141] libmachine: (newest-cni-130594)   <vcpu>2</vcpu>
	I0229 19:16:59.781620   52590 main.go:141] libmachine: (newest-cni-130594)   <features>
	I0229 19:16:59.781629   52590 main.go:141] libmachine: (newest-cni-130594)     <acpi/>
	I0229 19:16:59.781640   52590 main.go:141] libmachine: (newest-cni-130594)     <apic/>
	I0229 19:16:59.781677   52590 main.go:141] libmachine: (newest-cni-130594)     <pae/>
	I0229 19:16:59.781698   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.781724   52590 main.go:141] libmachine: (newest-cni-130594)   </features>
	I0229 19:16:59.781743   52590 main.go:141] libmachine: (newest-cni-130594)   <cpu mode='host-passthrough'>
	I0229 19:16:59.781755   52590 main.go:141] libmachine: (newest-cni-130594)   
	I0229 19:16:59.781771   52590 main.go:141] libmachine: (newest-cni-130594)   </cpu>
	I0229 19:16:59.781790   52590 main.go:141] libmachine: (newest-cni-130594)   <os>
	I0229 19:16:59.781823   52590 main.go:141] libmachine: (newest-cni-130594)     <type>hvm</type>
	I0229 19:16:59.781836   52590 main.go:141] libmachine: (newest-cni-130594)     <boot dev='cdrom'/>
	I0229 19:16:59.781844   52590 main.go:141] libmachine: (newest-cni-130594)     <boot dev='hd'/>
	I0229 19:16:59.781856   52590 main.go:141] libmachine: (newest-cni-130594)     <bootmenu enable='no'/>
	I0229 19:16:59.781865   52590 main.go:141] libmachine: (newest-cni-130594)   </os>
	I0229 19:16:59.781876   52590 main.go:141] libmachine: (newest-cni-130594)   <devices>
	I0229 19:16:59.781884   52590 main.go:141] libmachine: (newest-cni-130594)     <disk type='file' device='cdrom'>
	I0229 19:16:59.781897   52590 main.go:141] libmachine: (newest-cni-130594)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/boot2docker.iso'/>
	I0229 19:16:59.781908   52590 main.go:141] libmachine: (newest-cni-130594)       <target dev='hdc' bus='scsi'/>
	I0229 19:16:59.781927   52590 main.go:141] libmachine: (newest-cni-130594)       <readonly/>
	I0229 19:16:59.781945   52590 main.go:141] libmachine: (newest-cni-130594)     </disk>
	I0229 19:16:59.781955   52590 main.go:141] libmachine: (newest-cni-130594)     <disk type='file' device='disk'>
	I0229 19:16:59.781979   52590 main.go:141] libmachine: (newest-cni-130594)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 19:16:59.781997   52590 main.go:141] libmachine: (newest-cni-130594)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/newest-cni-130594.rawdisk'/>
	I0229 19:16:59.782010   52590 main.go:141] libmachine: (newest-cni-130594)       <target dev='hda' bus='virtio'/>
	I0229 19:16:59.782018   52590 main.go:141] libmachine: (newest-cni-130594)     </disk>
	I0229 19:16:59.782033   52590 main.go:141] libmachine: (newest-cni-130594)     <interface type='network'>
	I0229 19:16:59.782059   52590 main.go:141] libmachine: (newest-cni-130594)       <source network='mk-newest-cni-130594'/>
	I0229 19:16:59.782085   52590 main.go:141] libmachine: (newest-cni-130594)       <model type='virtio'/>
	I0229 19:16:59.782098   52590 main.go:141] libmachine: (newest-cni-130594)     </interface>
	I0229 19:16:59.782108   52590 main.go:141] libmachine: (newest-cni-130594)     <interface type='network'>
	I0229 19:16:59.782122   52590 main.go:141] libmachine: (newest-cni-130594)       <source network='default'/>
	I0229 19:16:59.782134   52590 main.go:141] libmachine: (newest-cni-130594)       <model type='virtio'/>
	I0229 19:16:59.782160   52590 main.go:141] libmachine: (newest-cni-130594)     </interface>
	I0229 19:16:59.782182   52590 main.go:141] libmachine: (newest-cni-130594)     <serial type='pty'>
	I0229 19:16:59.782209   52590 main.go:141] libmachine: (newest-cni-130594)       <target port='0'/>
	I0229 19:16:59.782251   52590 main.go:141] libmachine: (newest-cni-130594)     </serial>
	I0229 19:16:59.782270   52590 main.go:141] libmachine: (newest-cni-130594)     <console type='pty'>
	I0229 19:16:59.782281   52590 main.go:141] libmachine: (newest-cni-130594)       <target type='serial' port='0'/>
	I0229 19:16:59.782289   52590 main.go:141] libmachine: (newest-cni-130594)     </console>
	I0229 19:16:59.782296   52590 main.go:141] libmachine: (newest-cni-130594)     <rng model='virtio'>
	I0229 19:16:59.782306   52590 main.go:141] libmachine: (newest-cni-130594)       <backend model='random'>/dev/random</backend>
	I0229 19:16:59.782318   52590 main.go:141] libmachine: (newest-cni-130594)     </rng>
	I0229 19:16:59.782327   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.782334   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.782344   52590 main.go:141] libmachine: (newest-cni-130594)   </devices>
	I0229 19:16:59.782355   52590 main.go:141] libmachine: (newest-cni-130594) </domain>
	I0229 19:16:59.782361   52590 main.go:141] libmachine: (newest-cni-130594) 
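The <domain type='kvm'> document printed line by line above is what gets handed to libvirt ("define libvirt domain using xml", then "Creating domain..."). A hedged sketch of that step with the libvirt Go bindings (libvirt.org/go/libvirt); the real work happens inside the docker-machine-driver-kvm2 plugin, so treat this as an illustration only:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the same URI shown in the machine config (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // domain.xml would hold the <domain type='kvm'> document from the log.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Define the persistent domain, then boot it ("Creating domain..." above).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain started")
    }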
	I0229 19:16:59.786697   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:8d:17:19 in network default
	I0229 19:16:59.787322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:16:59.787348   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring networks are active...
	I0229 19:16:59.788077   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring network default is active
	I0229 19:16:59.788520   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring network mk-newest-cni-130594 is active
	I0229 19:16:59.789113   52590 main.go:141] libmachine: (newest-cni-130594) Getting domain xml...
	I0229 19:16:59.789804   52590 main.go:141] libmachine: (newest-cni-130594) Creating domain...
	I0229 19:17:01.129183   52590 main.go:141] libmachine: (newest-cni-130594) Waiting to get IP...
	I0229 19:17:01.130118   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.130599   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.130641   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.130578   52618 retry.go:31] will retry after 303.868776ms: waiting for machine to come up
	I0229 19:17:01.436101   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.436703   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.436733   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.436654   52618 retry.go:31] will retry after 299.644815ms: waiting for machine to come up
	I0229 19:17:01.738274   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.738742   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.738769   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.738705   52618 retry.go:31] will retry after 364.815241ms: waiting for machine to come up
	I0229 19:17:02.105155   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:02.105626   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:02.105655   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:02.105576   52618 retry.go:31] will retry after 484.317766ms: waiting for machine to come up
	I0229 19:17:02.591110   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:02.591531   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:02.591559   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:02.591477   52618 retry.go:31] will retry after 698.688666ms: waiting for machine to come up
	I0229 19:17:03.291933   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:03.292509   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:03.292537   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:03.292463   52618 retry.go:31] will retry after 779.864202ms: waiting for machine to come up
	I0229 19:17:04.074373   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:04.074800   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:04.074831   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:04.074747   52618 retry.go:31] will retry after 946.144699ms: waiting for machine to come up
	I0229 19:17:05.022155   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:05.022628   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:05.022655   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:05.022575   52618 retry.go:31] will retry after 1.080490095s: waiting for machine to come up
	I0229 19:17:06.104781   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:06.105246   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:06.105269   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:06.105175   52618 retry.go:31] will retry after 1.547469431s: waiting for machine to come up
	I0229 19:17:07.654746   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:07.655214   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:07.655242   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:07.655165   52618 retry.go:31] will retry after 1.69867016s: waiting for machine to come up
	I0229 19:17:09.355971   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:09.356493   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:09.356522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:09.356445   52618 retry.go:31] will retry after 2.383457338s: waiting for machine to come up
	I0229 19:17:11.741351   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:11.741845   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:11.741867   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:11.741798   52618 retry.go:31] will retry after 2.907806637s: waiting for machine to come up
	I0229 19:17:14.651011   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:14.651492   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:14.651541   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:14.651438   52618 retry.go:31] will retry after 3.634634613s: waiting for machine to come up
	I0229 19:17:18.288159   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:18.288691   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:18.288719   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:18.288626   52618 retry.go:31] will retry after 5.271835381s: waiting for machine to come up
	I0229 19:17:23.564046   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.564521   52590 main.go:141] libmachine: (newest-cni-130594) Found IP for machine: 192.168.72.67
	I0229 19:17:23.564542   52590 main.go:141] libmachine: (newest-cni-130594) Reserving static IP address...
	I0229 19:17:23.564572   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has current primary IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.564899   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find host DHCP lease matching {name: "newest-cni-130594", mac: "52:54:00:cd:4c:af", ip: "192.168.72.67"} in network mk-newest-cni-130594
	I0229 19:17:23.639032   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Getting to WaitForSSH function...
	I0229 19:17:23.639059   52590 main.go:141] libmachine: (newest-cni-130594) Reserved static IP address: 192.168.72.67
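The repeated "unable to find current IP address … will retry after …" entries above are a polling loop with growing delays that ends once the DHCP lease for MAC 52:54:00:cd:4c:af appears. A minimal sketch of the same retry pattern, with a hypothetical lookupIP probe standing in for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is hypothetical: the real code asks libvirt for the DHCP lease
    // matching the domain's MAC address on the mk-newest-cni-130594 network.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Grow the delay and add a little jitter, as the log's retry.go does.
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        if _, err := waitForIP(5 * time.Second); err != nil {
            fmt.Println(err)
        }
    }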
	I0229 19:17:23.639073   52590 main.go:141] libmachine: (newest-cni-130594) Waiting for SSH to be available...
	I0229 19:17:23.641664   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.642087   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.642112   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.642267   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using SSH client type: external
	I0229 19:17:23.642295   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa (-rw-------)
	I0229 19:17:23.642322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 19:17:23.642344   52590 main.go:141] libmachine: (newest-cni-130594) DBG | About to run SSH command:
	I0229 19:17:23.642359   52590 main.go:141] libmachine: (newest-cni-130594) DBG | exit 0
	I0229 19:17:23.768306   52590 main.go:141] libmachine: (newest-cni-130594) DBG | SSH cmd err, output: <nil>: 
	I0229 19:17:23.768620   52590 main.go:141] libmachine: (newest-cni-130594) KVM machine creation complete!
	I0229 19:17:23.769129   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:17:23.769657   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:23.769832   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:23.769921   52590 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 19:17:23.769932   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetState
	I0229 19:17:23.771299   52590 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 19:17:23.771314   52590 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 19:17:23.771320   52590 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 19:17:23.771325   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:23.773705   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.774105   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.774128   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.774268   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:23.774437   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.774609   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.774746   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:23.774892   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:23.775162   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:23.775179   52590 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 19:17:23.882603   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
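Both SSH probes above (the external /usr/bin/ssh invocation earlier and the native client here) boil down to running "exit 0" with the machine's generated id_rsa key until it succeeds. A rough equivalent using golang.org/x/crypto/ssh, offered as an illustration rather than libmachine's actual code:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH mimics the probe in the log: authenticate with the machine's
    // private key and run "exit 0"; success means the guest's sshd is ready.
    func waitForSSH(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // The log disables host-key checking (StrictHostKeyChecking=no).
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        // Address and user taken from the log; the key path is illustrative.
        if err := waitForSSH("192.168.72.67:22", "docker", "id_rsa"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }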
	I0229 19:17:23.882629   52590 main.go:141] libmachine: Detecting the provisioner...
	I0229 19:17:23.882639   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:23.885998   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.886406   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.886438   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.886651   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:23.886854   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.887075   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.887202   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:23.887406   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:23.887637   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:23.887662   52590 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 19:17:23.996263   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 19:17:23.998200   52590 main.go:141] libmachine: found compatible host: buildroot
	I0229 19:17:23.998210   52590 main.go:141] libmachine: Provisioning with buildroot...
	I0229 19:17:23.998219   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:23.998480   52590 buildroot.go:166] provisioning hostname "newest-cni-130594"
	I0229 19:17:23.998501   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:23.998717   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.001132   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.001566   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.001591   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.001712   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.001897   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.002078   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.002242   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.002443   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.002641   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.002656   52590 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-130594 && echo "newest-cni-130594" | sudo tee /etc/hostname
	I0229 19:17:24.121880   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-130594
	
	I0229 19:17:24.121909   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.124625   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.124996   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.125026   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.125195   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.125396   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.125566   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.125674   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.125851   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.126050   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.126067   52590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-130594' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-130594/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-130594' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:17:24.246036   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:17:24.246072   52590 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 19:17:24.246102   52590 buildroot.go:174] setting up certificates
	I0229 19:17:24.246113   52590 provision.go:83] configureAuth start
	I0229 19:17:24.246129   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:24.246397   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:24.249064   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.249381   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.249420   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.249577   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.251720   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.252187   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.252214   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.252351   52590 provision.go:138] copyHostCerts
	I0229 19:17:24.252395   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 19:17:24.252411   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 19:17:24.252484   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 19:17:24.252564   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 19:17:24.252572   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 19:17:24.252598   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 19:17:24.252646   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 19:17:24.252653   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 19:17:24.252674   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 19:17:24.252712   52590 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.newest-cni-130594 san=[192.168.72.67 192.168.72.67 localhost 127.0.0.1 minikube newest-cni-130594]
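The "generating server cert" line issues a CA-signed server certificate whose SANs cover the VM's IP, localhost and the machine names. A hedged, self-contained sketch of producing such a certificate with crypto/x509; the key sizes and usages here are assumptions for illustration, not minikube's exact parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // CA key pair standing in for ca.pem / ca-key.pem from the log.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate carrying the SANs listed in the log line.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-130594"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-130594"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.72.67"), net.ParseIP("127.0.0.1")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("server.pem would hold %d DER bytes", len(srvDER))
    }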
	I0229 19:17:24.380606   52590 provision.go:172] copyRemoteCerts
	I0229 19:17:24.380658   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:17:24.380680   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.383548   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.383968   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.383994   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.384207   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.384405   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.384580   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.384722   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:24.471294   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:17:24.500180   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 19:17:24.528971   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 19:17:24.555974   52590 provision.go:86] duration metric: configureAuth took 309.845099ms
	I0229 19:17:24.555999   52590 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:17:24.556219   52590 config.go:182] Loaded profile config "newest-cni-130594": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:17:24.556409   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.559791   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.560242   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.560269   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.560440   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.560631   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.560827   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.560978   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.561108   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.561257   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.561271   52590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 19:17:24.853732   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 19:17:24.853768   52590 main.go:141] libmachine: Checking connection to Docker...
	I0229 19:17:24.853782   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetURL
	I0229 19:17:24.855095   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using libvirt version 6000000
	I0229 19:17:24.857669   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.858053   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.858080   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.858218   52590 main.go:141] libmachine: Docker is up and running!
	I0229 19:17:24.858233   52590 main.go:141] libmachine: Reticulating splines...
	I0229 19:17:24.858241   52590 client.go:171] LocalClient.Create took 25.765633903s
	I0229 19:17:24.858264   52590 start.go:167] duration metric: libmachine.API.Create for "newest-cni-130594" took 25.765710381s
	I0229 19:17:24.858277   52590 start.go:300] post-start starting for "newest-cni-130594" (driver="kvm2")
	I0229 19:17:24.858305   52590 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:17:24.858323   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:24.858549   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:17:24.858575   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.860883   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.861178   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.861207   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.861311   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.861508   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.861650   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.861769   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:24.947737   52590 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:17:24.953000   52590 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:17:24.953024   52590 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 19:17:24.953083   52590 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 19:17:24.953170   52590 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 19:17:24.953307   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:17:24.964750   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:17:24.992425   52590 start.go:303] post-start completed in 134.13702ms
	I0229 19:17:24.992470   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:17:24.993035   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:24.995610   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.996085   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.996129   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.996348   52590 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json ...
	I0229 19:17:24.996508   52590 start.go:128] duration metric: createHost completed in 25.922884482s
	I0229 19:17:24.996529   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.998597   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.998887   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.998913   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.999018   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.999198   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.999370   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.999500   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.999692   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.999891   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.999903   52590 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 19:17:25.104443   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709234245.065729189
	
	I0229 19:17:25.104464   52590 fix.go:206] guest clock: 1709234245.065729189
	I0229 19:17:25.104471   52590 fix.go:219] Guest: 2024-02-29 19:17:25.065729189 +0000 UTC Remote: 2024-02-29 19:17:24.996518377 +0000 UTC m=+26.051925571 (delta=69.210812ms)
	I0229 19:17:25.104489   52590 fix.go:190] guest clock delta is within tolerance: 69.210812ms
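fix.go above parses the guest's date output, compares it with the host's idea of the time, and skips any adjustment because the 69.2ms delta is within tolerance. The same arithmetic, using the exact timestamps from the log (the 2s tolerance is an assumption for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host's
    // that no adjustment is needed.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Timestamps taken from the log: guest `date` output vs. host wall clock.
        guest := time.Unix(1709234245, 65729189).UTC()
        host := time.Date(2024, time.February, 29, 19, 17, 24, 996518377, time.UTC)
        delta, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("guest clock delta is %v, within tolerance: %v\n", delta, ok) // ~69.210812ms, true
    }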
	I0229 19:17:25.104493   52590 start.go:83] releasing machines lock for "newest-cni-130594", held for 26.030952225s
	I0229 19:17:25.104512   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.104764   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:25.107166   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.107475   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.107495   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.107624   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108094   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108247   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108332   52590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:17:25.108388   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:25.108429   52590 ssh_runner.go:195] Run: cat /version.json
	I0229 19:17:25.108469   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:25.111075   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111358   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111507   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.111532   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111668   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:25.111771   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.111798   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111825   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:25.111918   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:25.111992   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:25.112089   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:25.112116   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:25.112223   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:25.112340   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:25.188094   52590 ssh_runner.go:195] Run: systemctl --version
	I0229 19:17:25.215120   52590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 19:17:25.379266   52590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 19:17:25.386163   52590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:17:25.386239   52590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:17:25.405980   52590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:17:25.405997   52590 start.go:475] detecting cgroup driver to use...
	I0229 19:17:25.406056   52590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:17:25.423701   52590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:17:25.439286   52590 docker.go:217] disabling cri-docker service (if available) ...
	I0229 19:17:25.439343   52590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 19:17:25.454154   52590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 19:17:25.472204   52590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 19:17:25.599774   52590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 19:17:25.783605   52590 docker.go:233] disabling docker service ...
	I0229 19:17:25.783662   52590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 19:17:25.800927   52590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 19:17:25.817576   52590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 19:17:25.961885   52590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 19:17:26.091696   52590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 19:17:26.108268   52590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:17:26.128845   52590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 19:17:26.128913   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.139802   52590 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 19:17:26.139851   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.150834   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.161954   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.172783   52590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
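Taken together, the sed edits above are expected to leave three settings in the CRI-O drop-in: the pause image, the cgroup manager, and the conmon cgroup. A minimal sketch for confirming the result on the node (assuming the drop-in path used above):

	# Show the settings the preceding sed commands should have written.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
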
	I0229 19:17:26.184091   52590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:17:26.194645   52590 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 19:17:26.194691   52590 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 19:17:26.209767   52590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:17:26.220316   52590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:17:26.369106   52590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 19:17:26.528922   52590 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 19:17:26.529007   52590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 19:17:26.534202   52590 start.go:543] Will wait 60s for crictl version
	I0229 19:17:26.534260   52590 ssh_runner.go:195] Run: which crictl
	I0229 19:17:26.538713   52590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:17:26.577846   52590 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 19:17:26.577912   52590 ssh_runner.go:195] Run: crio --version
	I0229 19:17:26.610234   52590 ssh_runner.go:195] Run: crio --version
	I0229 19:17:26.648536   52590 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 19:17:26.649909   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:26.652522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:26.652911   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:26.652938   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:26.653119   52590 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 19:17:26.657984   52590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:17:26.673711   52590 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 19:17:26.674983   52590 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 19:17:26.675103   52590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:17:26.713136   52590 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 19:17:26.713204   52590 ssh_runner.go:195] Run: which lz4
	I0229 19:17:26.718227   52590 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 19:17:26.723311   52590 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 19:17:26.723337   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0229 19:17:28.411193   52590 crio.go:444] Took 1.692990 seconds to copy over tarball
	I0229 19:17:28.411289   52590 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 19:17:31.061912   52590 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.650595155s)
	I0229 19:17:31.061938   52590 crio.go:451] Took 2.650720 seconds to extract the tarball
	I0229 19:17:31.061949   52590 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 19:17:31.104722   52590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:17:31.154179   52590 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 19:17:31.154200   52590 cache_images.go:84] Images are preloaded, skipping loading
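The preload referenced here is an lz4-compressed tarball that gets unpacked under /var on the node (see the tar -I lz4 -C /var invocation above). A sketch for inspecting such a tarball on the build host before it is shipped, assuming the lz4 CLI is installed:

	# List the first entries of the cached preload tarball without extracting it.
	lz4 -dc /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 | tar -t | head
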
	I0229 19:17:31.154263   52590 ssh_runner.go:195] Run: crio config
	I0229 19:17:31.208130   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:17:31.208151   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:17:31.208172   52590 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 19:17:31.208189   52590 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.67 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-130594 NodeName:newest-cni-130594 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:17:31.208319   52590 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-130594"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
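A config like the one generated above can be exercised before it is applied; a sketch using kubeadm's dry-run mode against the path minikube writes on the node further below (run on the node itself):

	# Walk through the init phases without changing anything on the host.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
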
	
	I0229 19:17:31.208386   52590 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-130594 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
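The unit fragment above is installed as a systemd drop-in (10-kubeadm.conf, copied to /etc/systemd/system/kubelet.service.d/ just below). A sketch for checking that systemd has picked the override up on the node:

	# Print the kubelet unit plus drop-ins, then the effective ExecStart.
	systemctl cat kubelet
	systemctl show kubelet --property=ExecStart
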
	I0229 19:17:31.208438   52590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 19:17:31.219617   52590 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:17:31.219700   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:17:31.230716   52590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0229 19:17:31.250025   52590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 19:17:31.269406   52590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0229 19:17:31.289223   52590 ssh_runner.go:195] Run: grep 192.168.72.67	control-plane.minikube.internal$ /etc/hosts
	I0229 19:17:31.294155   52590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:17:31.308415   52590 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594 for IP: 192.168.72.67
	I0229 19:17:31.308465   52590 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.308594   52590 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 19:17:31.308644   52590 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 19:17:31.308682   52590 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key
	I0229 19:17:31.308694   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt with IP's: []
	I0229 19:17:31.602911   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt ...
	I0229 19:17:31.602954   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt: {Name:mk84b0372b7eeab5506ba924c29e59fb1d3a98c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.603165   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key ...
	I0229 19:17:31.603183   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key: {Name:mkea0ea101041d6e8d1d0994ce0ee3a3930c1c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.603306   52590 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a
	I0229 19:17:31.603325   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a with IP's: [192.168.72.67 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:17:31.822178   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a ...
	I0229 19:17:31.822207   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a: {Name:mkf89468c1c794c72942ee93be8239055b42f705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.822351   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a ...
	I0229 19:17:31.822367   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a: {Name:mkedc3a76aee512e961669f55be5eb52f5cd67a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.822459   52590 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt
	I0229 19:17:31.822559   52590 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key
	I0229 19:17:31.822635   52590 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key
	I0229 19:17:31.822654   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt with IP's: []
	I0229 19:17:32.071836   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt ...
	I0229 19:17:32.071865   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt: {Name:mk88223d09842bb710f0c20c4698f8412e7438e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:32.072022   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key ...
	I0229 19:17:32.072039   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key: {Name:mk1864f6af6a1518f2495e532f7d21e72a6b853b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:32.072194   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 19:17:32.072235   52590 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 19:17:32.072245   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 19:17:32.072272   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 19:17:32.072298   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 19:17:32.072324   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 19:17:32.072362   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:17:32.072907   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:17:32.108594   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:17:32.139943   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:17:32.170446   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 19:17:32.199534   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:17:32.228261   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:17:32.258155   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:17:32.286505   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 19:17:32.315425   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 19:17:32.342585   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:17:32.370286   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 19:17:32.397758   52590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:17:32.417755   52590 ssh_runner.go:195] Run: openssl version
	I0229 19:17:32.424572   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 19:17:32.437075   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.444117   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.444185   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.451416   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:17:32.464614   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:17:32.477240   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.483092   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.483153   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.490050   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:17:32.503604   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 19:17:32.516764   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.522614   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.522669   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.529410   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
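The block above repeats one pattern per CA file: hash the certificate subject with openssl, then point /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find the CA. A condensed sketch of the same two steps for a single file, reusing the hash seen in this run (b5213941 for minikubeCA.pem):

	# Compute the OpenSSL subject hash and create the lookup symlink it expects.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
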
	I0229 19:17:32.541829   52590 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:17:32.547363   52590 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:17:32.547416   52590 kubeadm.go:404] StartCluster: {Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.67 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:17:32.547519   52590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 19:17:32.547600   52590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 19:17:32.595281   52590 cri.go:89] found id: ""
	I0229 19:17:32.595365   52590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:17:32.606230   52590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:17:32.616665   52590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:17:32.627487   52590 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:17:32.627534   52590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:17:32.810408   52590 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:17:32.810486   52590 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:17:33.066979   52590 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:17:33.067098   52590 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:17:33.067220   52590 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
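As the preflight message notes, those images can be pulled ahead of time; a sketch of the pre-pull for the Kubernetes version and CRI socket used in this run:

	# Pre-fetch the control-plane images so 'kubeadm init' does not block on downloads.
	sudo kubeadm config images pull --kubernetes-version v1.29.0-rc.2 --cri-socket unix:///var/run/crio/crio.sock
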
	I0229 19:17:33.335672   52590 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:17:33.569684   52590 out.go:204]   - Generating certificates and keys ...
	I0229 19:17:33.569829   52590 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:17:33.569912   52590 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:17:33.570006   52590 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:17:33.670893   52590 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:17:33.728390   52590 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:17:34.089834   52590 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:17:34.212307   52590 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:17:34.212514   52590 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-130594] and IPs [192.168.72.67 127.0.0.1 ::1]
	I0229 19:17:34.592656   52590 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:17:34.592836   52590 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-130594] and IPs [192.168.72.67 127.0.0.1 ::1]
	I0229 19:17:34.713793   52590 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:17:34.898402   52590 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:17:35.116855   52590 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:17:35.117167   52590 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:17:35.191797   52590 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:17:35.554099   52590 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:17:35.718092   52590 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:17:35.845917   52590 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:17:35.945043   52590 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:17:35.945887   52590 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:17:35.953127   52590 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
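The entries below are the CRI-O service journal from the node (no-preload-247197, per the hostnames in the entries). A sketch of pulling the same stream by hand on a systemd-managed node such as the minikube VM:

	# Tail the CRI-O service journal without a pager.
	sudo journalctl -u crio --no-pager --since "10 minutes ago"
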
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.827217606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234256827189600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=569f6929-24a3-406e-aa04-1028d07dc250 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.827989522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ebd7bb0-7d22-44f8-af10-4c00e6b8692d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.828087527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ebd7bb0-7d22-44f8-af10-4c00e6b8692d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.828369706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ebd7bb0-7d22-44f8-af10-4c00e6b8692d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.868227392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc23d2c9-5fe3-46b7-8e80-1237d9189eb9 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.868335333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc23d2c9-5fe3-46b7-8e80-1237d9189eb9 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.870372562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5afe5b71-d651-4afa-83e9-b09c6ed047a8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.870723698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234256870699377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5afe5b71-d651-4afa-83e9-b09c6ed047a8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.871436012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cbdcb48-9339-4b0e-b501-e2f6512ea056 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.871490843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cbdcb48-9339-4b0e-b501-e2f6512ea056 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.871661409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cbdcb48-9339-4b0e-b501-e2f6512ea056 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.927844851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=356a0a17-c287-4bd4-a59e-745f6b874978 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.927961547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=356a0a17-c287-4bd4-a59e-745f6b874978 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.929345234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4d83bcd-43da-40f9-b949-100f4f3b728b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.929774969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234256929745906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4d83bcd-43da-40f9-b949-100f4f3b728b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.930526929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e38de71-e01f-44fe-8e22-4c738026606b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.930624486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e38de71-e01f-44fe-8e22-4c738026606b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.930851306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e38de71-e01f-44fe-8e22-4c738026606b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.973509805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37594f32-81b5-4f2b-a6f8-09952cd992ec name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.973580100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37594f32-81b5-4f2b-a6f8-09952cd992ec name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.974854355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f9d26bd-6dea-438f-8891-fb0ac962c1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.975296683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234256975274728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f9d26bd-6dea-438f-8891-fb0ac962c1c2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.976080552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3d22d2e-c4b6-4e2f-a3da-9e0b516f612d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.976218907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3d22d2e-c4b6-4e2f-a3da-9e0b516f612d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:36 no-preload-247197 crio[684]: time="2024-02-29 19:17:36.976413353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3d22d2e-c4b6-4e2f-a3da-9e0b516f612d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c77d304aa104b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   a493ebfe62c8e       storage-provisioner
	d8cab5559bada       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   2dc918253156b       coredns-76f75df574-9z6k5
	ecdd7783c1746       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   d6298b9e924d6       kube-proxy-vvkjv
	3e058bfecc2b8       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   a207c918f69f1       etcd-no-preload-247197
	9661e52ccd784       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   7f4d2f592e769       kube-controller-manager-no-preload-247197
	2c68222b7809e       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   4a8a310e4612b       kube-scheduler-no-preload-247197
	730a369e2636f       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   ede772d0d0419       kube-apiserver-no-preload-247197
	6edf3acff7dee       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   18 minutes ago      Exited              kube-apiserver            1                   d774f6e634f56       kube-apiserver-no-preload-247197
	
	
	==> coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59397 - 54629 "HINFO IN 2086180611971448474.6684178754634217295. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023047441s
	
	
	==> describe nodes <==
	Name:               no-preload-247197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-247197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=no-preload-247197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:04:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-247197
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.72
	  Hostname:    no-preload-247197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2650de0b91e48329c17e27b361311ab
	  System UUID:                e2650de0-b91e-4832-9c17-e27b361311ab
	  Boot ID:                    ffdc0861-0276-4e84-a23a-5d1542d1375a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-9z6k5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-247197                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-247197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-247197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vvkjv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-247197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-nj5h7              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-247197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-247197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-247197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-247197 event: Registered Node no-preload-247197 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060403] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046617] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796155] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.350287] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.714204] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.067185] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.059521] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067629] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.198604] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.116044] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.251607] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[ +17.623821] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 18:59] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +5.774117] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.723278] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.332065] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
	[Feb29 19:04] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.807135] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[ +13.223909] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.530656] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] <==
	{"level":"info","ts":"2024-02-29T19:04:02.179667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.179672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 received MsgVoteResp from 87349ef525ad2fc2 at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.17969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.179702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87349ef525ad2fc2 elected leader 87349ef525ad2fc2 at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.181422Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"87349ef525ad2fc2","local-member-attributes":"{Name:no-preload-247197 ClientURLs:[https://192.168.50.72:2379]}","request-path":"/0/members/87349ef525ad2fc2/attributes","cluster-id":"cf1dc574e5b9e532","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:04:02.181582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:04:02.181969Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.182191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:04:02.182498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:04:02.182543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T19:04:02.185213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.72:2379"}
	{"level":"info","ts":"2024-02-29T19:04:02.185322Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf1dc574e5b9e532","local-member-id":"87349ef525ad2fc2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.185432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.18548Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.195449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:14:02.246662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-02-29T19:14:02.250331Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"3.244583ms","hash":1858765987}
	{"level":"info","ts":"2024-02-29T19:14:02.25039Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1858765987,"revision":724,"compact-revision":-1}
	{"level":"warn","ts":"2024-02-29T19:17:15.934824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.754741ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441469154048742380 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.72\" mod_revision:1118 > success:<request_put:<key:\"/registry/masterleases/192.168.50.72\" value_size:66 lease:3441469154048742378 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.72\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-29T19:17:15.935057Z","caller":"traceutil/trace.go:171","msg":"trace[584242533] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"142.234317ms","start":"2024-02-29T19:17:15.792782Z","end":"2024-02-29T19:17:15.935017Z","steps":["trace[584242533] 'read index received'  (duration: 12.262964ms)","trace[584242533] 'applied index is now lower than readState.Index'  (duration: 129.969867ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T19:17:15.935326Z","caller":"traceutil/trace.go:171","msg":"trace[935966519] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"194.817161ms","start":"2024-02-29T19:17:15.740473Z","end":"2024-02-29T19:17:15.935291Z","steps":["trace[935966519] 'process raft request'  (duration: 64.606609ms)","trace[935966519] 'compare'  (duration: 128.573144ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:17:15.935811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.095278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:17:15.935882Z","caller":"traceutil/trace.go:171","msg":"trace[1227938759] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1126; }","duration":"143.206735ms","start":"2024-02-29T19:17:15.792662Z","end":"2024-02-29T19:17:15.935869Z","steps":["trace[1227938759] 'agreement among raft nodes before linearized reading'  (duration: 142.428456ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:17:33.414922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.032576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:17:33.415731Z","caller":"traceutil/trace.go:171","msg":"trace[395612955] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1139; }","duration":"104.779878ms","start":"2024-02-29T19:17:33.310844Z","end":"2024-02-29T19:17:33.415624Z","steps":["trace[395612955] 'range keys from in-memory index tree'  (duration: 103.966962ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:17:37 up 19 min,  0 users,  load average: 0.17, 0.15, 0.10
	Linux no-preload-247197 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] <==
	W0229 19:03:52.896347       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.110732       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.194046       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.299251       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.369940       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.471046       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.492031       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.564233       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.564291       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.586961       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.586980       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.671720       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.715114       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.727804       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.842808       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.879601       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.085842       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.112639       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.169809       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.242260       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.347342       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.658736       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.777476       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.838282       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.877257       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] <==
	I0229 19:12:04.770297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:14:03.771985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:03.772421       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0229 19:14:04.772749       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:04.772844       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:14:04.772868       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:14:04.772975       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:04.773386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:14:04.774723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:04.773537       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:04.773649       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:15:04.773713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:04.775928       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:04.776255       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:15:04.776310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:17:04.774352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:04.774466       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:17:04.774481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:17:04.776640       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:04.776824       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:17:04.776860       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] <==
	I0229 19:11:49.471424       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:19.014390       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:19.481285       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:49.019050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:49.491018       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:19.024592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:19.500079       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:49.031796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:49.510041       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:19.038096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:19.519820       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:49.043076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:49.528370       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:19.048674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:19.536033       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:23.449574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="211.713µs"
	I0229 19:15:35.448957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.457µs"
	E0229 19:15:49.054455       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:49.544207       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:19.060796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:19.554615       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:49.065856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:49.564290       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:19.072981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:19.573587       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] <==
	I0229 19:04:21.239570       1 server_others.go:72] "Using iptables proxy"
	I0229 19:04:21.256607       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.72"]
	I0229 19:04:21.392764       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 19:04:21.392815       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:04:21.392832       1 server_others.go:168] "Using iptables Proxier"
	I0229 19:04:21.406319       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:04:21.406553       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 19:04:21.406565       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:04:21.423377       1 config.go:188] "Starting service config controller"
	I0229 19:04:21.423652       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:04:21.425739       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:04:21.425852       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:04:21.431824       1 config.go:315] "Starting node config controller"
	I0229 19:04:21.431959       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:04:21.525471       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:04:21.526229       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:04:21.533687       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] <==
	W0229 19:04:04.815387       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 19:04:04.815446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 19:04:04.888916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:04:04.889070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:04:04.902473       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:04.902814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:04.977866       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:04.978003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.049108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 19:04:05.049391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 19:04:05.073903       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:05.074018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.080733       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:05.080814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.103662       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:04:05.103809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:04:05.160627       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 19:04:05.160881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 19:04:05.190981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 19:04:05.191050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 19:04:05.194894       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:04:05.194948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:04:05.216449       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:04:05.216506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0229 19:04:06.878350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443086    4095 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443244    4095 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443609    4095 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bzmmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nj5h7_kube-system(c53f2987-829e-4bea-8075-16af3a59249f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443648    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:23 no-preload-247197 kubelet[4095]: E0229 19:15:23.429408    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:35 no-preload-247197 kubelet[4095]: E0229 19:15:35.429752    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:49 no-preload-247197 kubelet[4095]: E0229 19:15:49.428952    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:02 no-preload-247197 kubelet[4095]: E0229 19:16:02.429726    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]: E0229 19:16:07.511956    4095 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:16:13 no-preload-247197 kubelet[4095]: E0229 19:16:13.432787    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:27 no-preload-247197 kubelet[4095]: E0229 19:16:27.429684    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:40 no-preload-247197 kubelet[4095]: E0229 19:16:40.428907    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:53 no-preload-247197 kubelet[4095]: E0229 19:16:53.429200    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:04 no-preload-247197 kubelet[4095]: E0229 19:17:04.429015    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]: E0229 19:17:07.512490    4095 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:17:17 no-preload-247197 kubelet[4095]: E0229 19:17:17.430912    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:28 no-preload-247197 kubelet[4095]: E0229 19:17:28.429295    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	
	
	==> storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] <==
	I0229 19:04:22.525311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:04:22.538520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:04:22.538631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:04:22.546862       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:04:22.547049       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b!
	I0229 19:04:22.548893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d1ca69e-678d-46db-bce2-7a4947442015", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b became leader
	I0229 19:04:22.650285       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b!
	

                                                
                                                
-- /stdout --
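Note on the scheduler log above: the "forbidden" list/watch errors from system:kube-scheduler appear to be transient startup-ordering noise, and they stop once the "Caches are synced" line is logged, so they are unlikely to be related to this failure. A quick way to confirm the scheduler's RBAC afterwards (a sketch using impersonation; the context name is this profile's):

	kubectl --context no-preload-247197 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context no-preload-247197 auth can-i watch csidrivers.storage.k8s.io --as=system:kube-scheduler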
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-247197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nj5h7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7: exit status 1 (79.118361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nj5h7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.45s)
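Note: the metrics-server ImagePullBackOff messages in the kubelet log above are expected in this group; the addon was deliberately pointed at an unreachable registry (fake.domain) when it was enabled, as the Audit log further below records. A minimal sketch of reproducing and inspecting that state (profile and image values are taken from this report; the k8s-app=metrics-server label is assumed from the standard metrics-server manifest):

	# Re-apply the registry/image override used by the test.
	out/minikube-linux-amd64 -p no-preload-247197 addons enable metrics-server \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

	# Inspect the resulting pull failure without depending on the generated pod suffix.
	kubectl --context no-preload-247197 -n kube-system describe pods -l k8s-app=metrics-server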

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
E0229 19:15:46.839812   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.214:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.214:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (243.091637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-631080" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-631080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-631080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.36µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-631080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
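Note: every dashboard poll above failed with "connection refused" because the old-k8s-version-631080 apiserver never came back after the restart (the APIServer status check above returned Stopped, while the Host check below returns Running), so the 9m0s wait could not list any kubernetes-dashboard pods. For reference, the kubectl equivalent of the REST call the helper retries (context, namespace, and label selector taken from the warning URLs) is:

	kubectl --context old-k8s-version-631080 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard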
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (233.379321ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-631080 logs -n 25: (1.735358939s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-541086                           | kubernetes-upgrade-541086    | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:53:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:53:39.272407   48088 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:53:39.272662   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272672   48088 out.go:304] Setting ErrFile to fd 2...
	I0229 18:53:39.272676   48088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:53:39.272900   48088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:53:39.273517   48088 out.go:298] Setting JSON to false
	I0229 18:53:39.274405   48088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1709227056,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:53:39.274466   48088 start.go:139] virtualization: kvm guest
	I0229 18:53:39.276633   48088 out.go:177] * [default-k8s-diff-port-153528] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:53:39.278195   48088 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:53:39.278144   48088 notify.go:220] Checking for updates...
	I0229 18:53:39.280040   48088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:53:39.281568   48088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:53:39.282972   48088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:53:39.284383   48088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:53:39.285858   48088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:53:39.287467   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:53:39.287851   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.287889   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.302503   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0229 18:53:39.302895   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.303402   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.303427   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.303737   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.303893   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.304118   48088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:53:39.304507   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:53:39.304554   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:53:39.318572   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0229 18:53:39.318978   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:53:39.319454   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:53:39.319482   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:53:39.319748   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:53:39.319924   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:53:39.351526   48088 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:53:39.352970   48088 start.go:299] selected driver: kvm2
	I0229 18:53:39.352988   48088 start.go:903] validating driver "kvm2" against &{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.353115   48088 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:53:39.353788   48088 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.353869   48088 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:53:39.369184   48088 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:53:39.369569   48088 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:53:39.369647   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:53:39.369664   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:53:39.369679   48088 start_flags.go:323] config:
	{Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:39.369878   48088 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:53:39.372634   48088 out.go:177] * Starting control plane node default-k8s-diff-port-153528 in cluster default-k8s-diff-port-153528
	I0229 18:53:41.043270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:39.373930   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:53:39.373998   48088 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 18:53:39.374011   48088 cache.go:56] Caching tarball of preloaded images
	I0229 18:53:39.374104   48088 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 18:53:39.374116   48088 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 18:53:39.374227   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:53:39.374456   48088 start.go:365] acquiring machines lock for default-k8s-diff-port-153528: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:53:44.115305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:50.195317   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:53.267316   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:53:59.347225   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:02.419258   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:08.499302   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:11.571267   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:17.651296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:20.723290   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:26.803304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:29.875293   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:35.955253   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:39.027319   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:45.107197   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:48.179318   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:54.259261   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:54:57.331310   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:03.411271   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:06.483320   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:12.563270   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:15.635250   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:21.715338   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:24.787238   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:30.867305   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:33.939296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:40.019217   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:43.091236   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:49.171281   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:52.243241   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:55:58.323315   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:01.395368   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:07.475286   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:10.547288   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:16.627301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:19.699291   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:25.779304   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:28.851346   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:34.931303   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:38.003301   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:44.083295   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:47.155306   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:53.235287   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:56:56.307311   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:02.387296   47515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.72:22: connect: no route to host
	I0229 18:57:05.391079   47608 start.go:369] acquired machines lock for "embed-certs-991128" in 4m30.01926313s
	I0229 18:57:05.391125   47608 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:05.391130   47608 fix.go:54] fixHost starting: 
	I0229 18:57:05.391473   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:05.391502   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:05.406385   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0229 18:57:05.406855   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:05.407342   47608 main.go:141] libmachine: Using API Version  1
	I0229 18:57:05.407366   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:05.407730   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:05.407939   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:05.408088   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 18:57:05.409862   47608 fix.go:102] recreateIfNeeded on embed-certs-991128: state=Stopped err=<nil>
	I0229 18:57:05.409895   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	W0229 18:57:05.410005   47608 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:05.411812   47608 out.go:177] * Restarting existing kvm2 VM for "embed-certs-991128" ...
	I0229 18:57:05.389096   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:05.389139   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:57:05.390953   47515 machine.go:91] provisioned docker machine in 4m37.390712428s
	I0229 18:57:05.390991   47515 fix.go:56] fixHost completed within 4m37.410903519s
	I0229 18:57:05.390997   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 4m37.410926595s
	W0229 18:57:05.391017   47515 start.go:694] error starting host: provision: host is not running
	W0229 18:57:05.391155   47515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 18:57:05.391169   47515 start.go:709] Will try again in 5 seconds ...
	I0229 18:57:05.413295   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Start
	I0229 18:57:05.413478   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring networks are active...
	I0229 18:57:05.414184   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network default is active
	I0229 18:57:05.414495   47608 main.go:141] libmachine: (embed-certs-991128) Ensuring network mk-embed-certs-991128 is active
	I0229 18:57:05.414834   47608 main.go:141] libmachine: (embed-certs-991128) Getting domain xml...
	I0229 18:57:05.415508   47608 main.go:141] libmachine: (embed-certs-991128) Creating domain...
	I0229 18:57:06.606675   47608 main.go:141] libmachine: (embed-certs-991128) Waiting to get IP...
	I0229 18:57:06.607445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.607771   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.607826   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.607762   48607 retry.go:31] will retry after 250.745087ms: waiting for machine to come up
	I0229 18:57:06.860293   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:06.860711   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:06.860738   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:06.860671   48607 retry.go:31] will retry after 259.096096ms: waiting for machine to come up
	I0229 18:57:07.121033   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.121429   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.121458   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.121381   48607 retry.go:31] will retry after 318.126905ms: waiting for machine to come up
	I0229 18:57:07.440859   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:07.441299   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:07.441328   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:07.441243   48607 retry.go:31] will retry after 570.321317ms: waiting for machine to come up
	I0229 18:57:08.012896   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.013331   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.013367   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.013295   48607 retry.go:31] will retry after 489.540139ms: waiting for machine to come up
	I0229 18:57:08.503916   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:08.504321   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:08.504358   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:08.504269   48607 retry.go:31] will retry after 929.011093ms: waiting for machine to come up
	I0229 18:57:09.435395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:09.435803   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:09.435851   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:09.435761   48607 retry.go:31] will retry after 1.087849565s: waiting for machine to come up
	I0229 18:57:10.391806   47515 start.go:365] acquiring machines lock for no-preload-247197: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:57:10.525247   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:10.525663   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:10.525697   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:10.525612   48607 retry.go:31] will retry after 954.10405ms: waiting for machine to come up
	I0229 18:57:11.481162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:11.481610   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:11.481640   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:11.481558   48607 retry.go:31] will retry after 1.495484693s: waiting for machine to come up
	I0229 18:57:12.979123   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:12.979547   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:12.979572   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:12.979499   48607 retry.go:31] will retry after 2.307927756s: waiting for machine to come up
	I0229 18:57:15.288445   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:15.288841   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:15.288871   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:15.288785   48607 retry.go:31] will retry after 2.89615753s: waiting for machine to come up
	I0229 18:57:18.188102   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:18.188474   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:18.188504   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:18.188426   48607 retry.go:31] will retry after 3.511036368s: waiting for machine to come up
	I0229 18:57:21.701039   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:21.701395   47608 main.go:141] libmachine: (embed-certs-991128) DBG | unable to find current IP address of domain embed-certs-991128 in network mk-embed-certs-991128
	I0229 18:57:21.701425   47608 main.go:141] libmachine: (embed-certs-991128) DBG | I0229 18:57:21.701356   48607 retry.go:31] will retry after 3.516537008s: waiting for machine to come up
	I0229 18:57:25.220199   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220641   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has current primary IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.220655   47608 main.go:141] libmachine: (embed-certs-991128) Found IP for machine: 192.168.61.34
	I0229 18:57:25.220663   47608 main.go:141] libmachine: (embed-certs-991128) Reserving static IP address...
	I0229 18:57:25.221122   47608 main.go:141] libmachine: (embed-certs-991128) Reserved static IP address: 192.168.61.34
	I0229 18:57:25.221162   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.221179   47608 main.go:141] libmachine: (embed-certs-991128) Waiting for SSH to be available...
	I0229 18:57:25.221222   47608 main.go:141] libmachine: (embed-certs-991128) DBG | skip adding static IP to network mk-embed-certs-991128 - found existing host DHCP lease matching {name: "embed-certs-991128", mac: "52:54:00:44:76:e2", ip: "192.168.61.34"}
	I0229 18:57:25.221243   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Getting to WaitForSSH function...
	I0229 18:57:25.223450   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223775   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.223809   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.223951   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH client type: external
	I0229 18:57:25.223981   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa (-rw-------)
	I0229 18:57:25.224014   47608 main.go:141] libmachine: (embed-certs-991128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:25.224032   47608 main.go:141] libmachine: (embed-certs-991128) DBG | About to run SSH command:
	I0229 18:57:25.224052   47608 main.go:141] libmachine: (embed-certs-991128) DBG | exit 0
	I0229 18:57:26.464131   47919 start.go:369] acquired machines lock for "old-k8s-version-631080" in 4m11.42071391s
	I0229 18:57:26.464193   47919 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:26.464200   47919 fix.go:54] fixHost starting: 
	I0229 18:57:26.464621   47919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:26.464657   47919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:26.480155   47919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0229 18:57:26.480488   47919 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:26.481000   47919 main.go:141] libmachine: Using API Version  1
	I0229 18:57:26.481027   47919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:26.481327   47919 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:26.481514   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:26.481669   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetState
	I0229 18:57:26.482869   47919 fix.go:102] recreateIfNeeded on old-k8s-version-631080: state=Stopped err=<nil>
	I0229 18:57:26.482885   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	W0229 18:57:26.483052   47919 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:26.485421   47919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-631080" ...
	I0229 18:57:25.351081   47608 main.go:141] libmachine: (embed-certs-991128) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:25.351434   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetConfigRaw
	I0229 18:57:25.352022   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.354349   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354705   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.354734   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.354944   47608 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/config.json ...
	I0229 18:57:25.355150   47608 machine.go:88] provisioning docker machine ...
	I0229 18:57:25.355169   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:25.355351   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355501   47608 buildroot.go:166] provisioning hostname "embed-certs-991128"
	I0229 18:57:25.355528   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.355763   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.357784   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358109   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.358134   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.358265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.358429   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358567   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.358683   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.358840   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.359062   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.359078   47608 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-991128 && echo "embed-certs-991128" | sudo tee /etc/hostname
	I0229 18:57:25.487161   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991128
	
	I0229 18:57:25.487197   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.489979   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490275   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.490308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.490539   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.490755   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.490908   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.491047   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.491191   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.491377   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.491405   47608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-991128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-991128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-991128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:25.617911   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:25.617941   47608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:25.617961   47608 buildroot.go:174] setting up certificates
	I0229 18:57:25.617971   47608 provision.go:83] configureAuth start
	I0229 18:57:25.617980   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetMachineName
	I0229 18:57:25.618235   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:25.620943   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621286   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.621318   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.621460   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.623629   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.623936   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.623961   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.624074   47608 provision.go:138] copyHostCerts
	I0229 18:57:25.624133   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:25.624154   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:25.624240   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:25.624344   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:25.624355   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:25.624383   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:25.624455   47608 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:25.624462   47608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:25.624483   47608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:25.624538   47608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.embed-certs-991128 san=[192.168.61.34 192.168.61.34 localhost 127.0.0.1 minikube embed-certs-991128]
	I0229 18:57:25.757225   47608 provision.go:172] copyRemoteCerts
	I0229 18:57:25.757278   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:25.757301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.759794   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760098   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.760125   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.760287   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.760488   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.760664   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.760798   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:25.849527   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:57:25.875673   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:25.902046   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:57:25.927830   47608 provision.go:86] duration metric: configureAuth took 309.850774ms
	I0229 18:57:25.927862   47608 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:25.928081   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:57:25.928163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:25.930565   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.930917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:25.930945   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:25.931135   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:25.931336   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931493   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:25.931649   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:25.931806   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:25.932003   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:25.932026   47608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:26.205080   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:26.205139   47608 machine.go:91] provisioned docker machine in 849.974413ms
	I0229 18:57:26.205154   47608 start.go:300] post-start starting for "embed-certs-991128" (driver="kvm2")
	I0229 18:57:26.205168   47608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:26.205191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.205537   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:26.205568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.208107   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208417   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.208443   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.208625   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.208804   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.208975   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.209084   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.303090   47608 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:26.309522   47608 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:26.309543   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:26.309609   47608 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:26.309697   47608 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:26.309800   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:26.319897   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:26.346220   47608 start.go:303] post-start completed in 141.055399ms
	I0229 18:57:26.346242   47608 fix.go:56] fixHost completed within 20.955110287s
	I0229 18:57:26.346265   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.348878   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349237   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.349278   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.349415   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.349591   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349742   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.349860   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.350032   47608 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:26.350224   47608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.34 22 <nil> <nil>}
	I0229 18:57:26.350235   47608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:26.463992   47608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233046.436502673
	
	I0229 18:57:26.464017   47608 fix.go:206] guest clock: 1709233046.436502673
	I0229 18:57:26.464027   47608 fix.go:219] Guest: 2024-02-29 18:57:26.436502673 +0000 UTC Remote: 2024-02-29 18:57:26.346246091 +0000 UTC m=+291.120011459 (delta=90.256582ms)
	I0229 18:57:26.464055   47608 fix.go:190] guest clock delta is within tolerance: 90.256582ms
	I0229 18:57:26.464062   47608 start.go:83] releasing machines lock for "embed-certs-991128", held for 21.072955529s
	I0229 18:57:26.464099   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.464362   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:26.466954   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467308   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.467350   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.467452   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468058   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468227   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 18:57:26.468287   47608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:26.468356   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.468456   47608 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:26.468477   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 18:57:26.470917   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.470996   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471291   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471322   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471352   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:26.471369   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:26.471562   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471602   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 18:57:26.471719   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471783   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 18:57:26.471873   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.471940   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 18:57:26.472005   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.472095   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 18:57:26.560629   47608 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:26.587852   47608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:26.752819   47608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:26.760557   47608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:26.760629   47608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:26.778065   47608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:26.778096   47608 start.go:475] detecting cgroup driver to use...
	I0229 18:57:26.778156   47608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:26.795970   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:26.810591   47608 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:26.810634   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:26.826715   47608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:26.840879   47608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:26.959536   47608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:27.143802   47608 docker.go:233] disabling docker service ...
	I0229 18:57:27.143856   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:27.164748   47608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:27.183161   47608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:27.322659   47608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:27.471650   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:27.489290   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:27.512706   47608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:57:27.512770   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.524596   47608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:27.524657   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.536202   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.547343   47608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:27.558390   47608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:27.571297   47608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:27.580859   47608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:27.580903   47608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:27.595324   47608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:27.606130   47608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:27.736363   47608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:27.877719   47608 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:27.877804   47608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:27.882920   47608 start.go:543] Will wait 60s for crictl version
	I0229 18:57:27.883035   47608 ssh_runner.go:195] Run: which crictl
	I0229 18:57:27.887132   47608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:27.925964   47608 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:27.926061   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.958046   47608 ssh_runner.go:195] Run: crio --version
	I0229 18:57:27.991575   47608 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
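For reference, the runtime configuration applied in the lines above amounts to roughly the following state on the guest. This is an approximate reconstruction from the logged commands (the tee of /etc/crictl.yaml, the two sed edits, and the earlier /etc/sysconfig drop-in), assuming the stock CRI-O drop-in layout:

    /etc/crictl.yaml
        runtime-endpoint: unix:///var/run/crio/crio.sock

    /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
        pause_image = "registry.k8s.io/pause:3.9"
        cgroup_manager = "cgroupfs"
        conmon_cgroup = "pod"

    /etc/sysconfig/crio.minikube
        CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

followed by systemctl daemon-reload and systemctl restart crio, after which crictl reports cri-o 1.29.1.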
	I0229 18:57:26.486586   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .Start
	I0229 18:57:26.486734   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring networks are active...
	I0229 18:57:26.487377   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network default is active
	I0229 18:57:26.487679   47919 main.go:141] libmachine: (old-k8s-version-631080) Ensuring network mk-old-k8s-version-631080 is active
	I0229 18:57:26.488006   47919 main.go:141] libmachine: (old-k8s-version-631080) Getting domain xml...
	I0229 18:57:26.488624   47919 main.go:141] libmachine: (old-k8s-version-631080) Creating domain...
	I0229 18:57:27.689480   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting to get IP...
	I0229 18:57:27.690414   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:27.690858   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:27.690932   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:27.690848   48724 retry.go:31] will retry after 309.860592ms: waiting for machine to come up
	I0229 18:57:28.002437   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.002926   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.002959   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.002884   48724 retry.go:31] will retry after 298.018759ms: waiting for machine to come up
	I0229 18:57:28.302325   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.302849   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.302879   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.302801   48724 retry.go:31] will retry after 312.821928ms: waiting for machine to come up
	I0229 18:57:28.617315   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.617797   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.617831   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.617753   48724 retry.go:31] will retry after 373.960028ms: waiting for machine to come up
	I0229 18:57:28.993230   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:28.993860   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:28.993881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:28.993809   48724 retry.go:31] will retry after 516.423282ms: waiting for machine to come up
	I0229 18:57:29.512208   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:29.512683   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:29.512718   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:29.512651   48724 retry.go:31] will retry after 776.839747ms: waiting for machine to come up
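The interleaved 47919 lines show libmachine waiting for the old-k8s-version VM to obtain a DHCP lease, retrying with a growing delay each time the lookup finds no IP. A rough stand-alone Go sketch of that wait-for-IP pattern follows; it is not minikube's retry.go, and lookupIP is a hypothetical placeholder for querying the libvirt lease table.

    // waitforip_sketch.go: poll for a DHCP lease with growing, jittered delays (illustrative only).
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for reading the libvirt DHCP leases for the domain's MAC.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet") // hypothetical placeholder
    }

    func waitForIP(mac string, deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 300 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, mirroring the increasing
            // "will retry after ..." intervals in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for an IP on %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:1b:b2:7e", 5*time.Second); err == nil {
            fmt.Println("got IP:", ip)
        } else {
            fmt.Println(err)
        }
    }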
	I0229 18:57:27.992835   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetIP
	I0229 18:57:27.995847   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996225   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 18:57:27.996255   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 18:57:27.996483   47608 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:28.001148   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:28.016232   47608 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:57:28.016293   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:28.055181   47608 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:57:28.055248   47608 ssh_runner.go:195] Run: which lz4
	I0229 18:57:28.059680   47608 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:28.064299   47608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:28.064330   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:57:29.988576   47608 crio.go:444] Took 1.928948 seconds to copy over tarball
	I0229 18:57:29.988670   47608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:30.290748   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:30.291228   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:30.291276   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:30.291195   48724 retry.go:31] will retry after 846.002471ms: waiting for machine to come up
	I0229 18:57:31.139734   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:31.140157   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:31.140177   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:31.140114   48724 retry.go:31] will retry after 1.01688411s: waiting for machine to come up
	I0229 18:57:32.158306   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:32.158845   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:32.158868   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:32.158827   48724 retry.go:31] will retry after 1.217119434s: waiting for machine to come up
	I0229 18:57:33.377121   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:33.377508   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:33.377538   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:33.377475   48724 retry.go:31] will retry after 1.566910779s: waiting for machine to come up
	I0229 18:57:32.844311   47608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.855608287s)
	I0229 18:57:32.844344   47608 crio.go:451] Took 2.855747 seconds to extract the tarball
	I0229 18:57:32.844356   47608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:32.890199   47608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:32.953328   47608 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:57:32.953351   47608 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:57:32.953408   47608 ssh_runner.go:195] Run: crio config
	I0229 18:57:33.006678   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:33.006701   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:33.006717   47608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:33.006734   47608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.34 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-991128 NodeName:embed-certs-991128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:57:33.006872   47608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-991128"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:33.006951   47608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-991128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:33.006998   47608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:57:33.018746   47608 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:33.018824   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:33.029994   47608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 18:57:33.050522   47608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:33.070313   47608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0229 18:57:33.091436   47608 ssh_runner.go:195] Run: grep 192.168.61.34	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:33.096253   47608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:33.110683   47608 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128 for IP: 192.168.61.34
	I0229 18:57:33.110720   47608 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:33.110892   47608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:33.110957   47608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:33.111075   47608 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/client.key
	I0229 18:57:33.111147   47608 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key.d8cf1313
	I0229 18:57:33.111195   47608 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key
	I0229 18:57:33.111320   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:33.111352   47608 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:33.111362   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:33.111383   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:33.111406   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:33.111443   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:33.111479   47608 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:33.112071   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:33.143498   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:57:33.171567   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:33.199300   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/embed-certs-991128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:57:33.226492   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:33.254025   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:33.281215   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:33.311188   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:33.342138   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:33.373884   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:33.401130   47608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:33.427527   47608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:33.446246   47608 ssh_runner.go:195] Run: openssl version
	I0229 18:57:33.455476   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:33.473394   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478904   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.478961   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:33.485913   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:33.499458   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:33.512861   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518749   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.518808   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:33.525612   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:33.539397   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:33.552302   47608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557481   47608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.557543   47608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:33.564226   47608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:33.577315   47608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:33.582527   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:33.589246   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:33.595992   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:33.602535   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:33.609231   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:33.616292   47608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:33.623124   47608 kubeadm.go:404] StartCluster: {Name:embed-certs-991128 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-991128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:33.623239   47608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:33.623281   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:33.663871   47608 cri.go:89] found id: ""
	I0229 18:57:33.663948   47608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:33.676484   47608 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:33.676519   47608 kubeadm.go:636] restartCluster start
	I0229 18:57:33.676576   47608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:33.690000   47608 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:33.690903   47608 kubeconfig.go:92] found "embed-certs-991128" server: "https://192.168.61.34:8443"
	I0229 18:57:33.692909   47608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:33.706062   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:33.706162   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:33.722166   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.206285   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.206371   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.222736   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.706286   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:34.706415   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:34.721170   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:35.206815   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.206905   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.223777   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:34.946027   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:35.171546   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:35.171576   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:34.946337   48724 retry.go:31] will retry after 2.169140366s: waiting for machine to come up
	I0229 18:57:37.117080   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:37.117531   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:37.117564   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:37.117491   48724 retry.go:31] will retry after 2.187461538s: waiting for machine to come up
	I0229 18:57:39.307825   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:39.308159   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:39.308199   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:39.308131   48724 retry.go:31] will retry after 4.480150028s: waiting for machine to come up
	I0229 18:57:35.706239   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:35.706327   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:35.727095   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.206608   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.206718   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.220509   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:36.707149   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:36.707237   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:36.725852   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.206401   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.206530   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.225323   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:37.706920   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:37.707051   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:37.725340   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.207012   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.207113   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.225343   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:38.706906   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:38.706988   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:38.720820   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.206324   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.206399   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.220757   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:39.706274   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:39.706361   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:39.719994   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:40.206511   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.206589   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.219998   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.790597   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:43.791050   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | unable to find current IP address of domain old-k8s-version-631080 in network mk-old-k8s-version-631080
	I0229 18:57:43.791076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | I0229 18:57:43.790999   48724 retry.go:31] will retry after 3.830907426s: waiting for machine to come up
	I0229 18:57:40.706115   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:40.706262   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:40.719892   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.206440   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.206518   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.220057   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:41.706585   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:41.706677   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:41.720355   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.206977   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.207107   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.220629   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:42.706185   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:42.706266   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:42.720230   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.206832   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.206901   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.221019   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.706611   47608 api_server.go:166] Checking apiserver status ...
	I0229 18:57:43.706693   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:43.720457   47608 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:43.720489   47608 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:57:43.720501   47608 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:57:43.720515   47608 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:57:43.720572   47608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:43.757509   47608 cri.go:89] found id: ""
	I0229 18:57:43.757592   47608 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:57:43.777950   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:57:43.788404   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:57:43.788455   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799322   47608 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:57:43.799340   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:43.907052   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.731907   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:44.940317   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.040382   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:45.113335   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:57:45.113418   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:48.808893   48088 start.go:369] acquired machines lock for "default-k8s-diff-port-153528" in 4m9.434383703s
	I0229 18:57:48.808960   48088 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:57:48.808973   48088 fix.go:54] fixHost starting: 
	I0229 18:57:48.809402   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:57:48.809445   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:57:48.829022   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41617
	I0229 18:57:48.829448   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:57:48.830097   48088 main.go:141] libmachine: Using API Version  1
	I0229 18:57:48.830129   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:57:48.830547   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:57:48.830766   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:57:48.830918   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 18:57:48.832707   48088 fix.go:102] recreateIfNeeded on default-k8s-diff-port-153528: state=Stopped err=<nil>
	I0229 18:57:48.832733   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	W0229 18:57:48.832879   48088 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:57:48.834969   48088 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-153528" ...
	I0229 18:57:48.836190   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Start
	I0229 18:57:48.836352   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring networks are active...
	I0229 18:57:48.837051   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network default is active
	I0229 18:57:48.837440   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Ensuring network mk-default-k8s-diff-port-153528 is active
	I0229 18:57:48.837886   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Getting domain xml...
	I0229 18:57:48.838747   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Creating domain...
	I0229 18:57:47.623408   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623861   47919 main.go:141] libmachine: (old-k8s-version-631080) Found IP for machine: 192.168.83.214
	I0229 18:57:47.623891   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has current primary IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.623900   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserving static IP address...
	I0229 18:57:47.624340   47919 main.go:141] libmachine: (old-k8s-version-631080) Reserved static IP address: 192.168.83.214
	I0229 18:57:47.624374   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.624390   47919 main.go:141] libmachine: (old-k8s-version-631080) Waiting for SSH to be available...
	I0229 18:57:47.624419   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | skip adding static IP to network mk-old-k8s-version-631080 - found existing host DHCP lease matching {name: "old-k8s-version-631080", mac: "52:54:00:1b:b2:7e", ip: "192.168.83.214"}
	I0229 18:57:47.624440   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Getting to WaitForSSH function...
	I0229 18:57:47.626600   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.626881   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.626904   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.627042   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH client type: external
	I0229 18:57:47.627070   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa (-rw-------)
	I0229 18:57:47.627106   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:57:47.627127   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | About to run SSH command:
	I0229 18:57:47.627146   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | exit 0
	I0229 18:57:47.751206   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | SSH cmd err, output: <nil>: 
	I0229 18:57:47.751569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetConfigRaw
	I0229 18:57:47.752158   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.754701   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755064   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.755089   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.755331   47919 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/config.json ...
	I0229 18:57:47.755551   47919 machine.go:88] provisioning docker machine ...
	I0229 18:57:47.755569   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:47.755772   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.755961   47919 buildroot.go:166] provisioning hostname "old-k8s-version-631080"
	I0229 18:57:47.755979   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.756102   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.758421   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758767   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.758796   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.758895   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.759065   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.759387   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.759548   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.759718   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.759730   47919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-631080 && echo "old-k8s-version-631080" | sudo tee /etc/hostname
	I0229 18:57:47.879204   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-631080
	
	I0229 18:57:47.879233   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:47.881915   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882207   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:47.882237   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:47.882407   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:47.882582   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882737   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:47.882880   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:47.883053   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:47.883244   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:47.883262   47919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-631080' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-631080/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-631080' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:57:47.996920   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:57:47.996948   47919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:57:47.996964   47919 buildroot.go:174] setting up certificates
	I0229 18:57:47.996972   47919 provision.go:83] configureAuth start
	I0229 18:57:47.996980   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetMachineName
	I0229 18:57:47.997262   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:47.999702   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000044   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.000076   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.000207   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.002169   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002457   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.002479   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.002552   47919 provision.go:138] copyHostCerts
	I0229 18:57:48.002600   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:57:48.002623   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:57:48.002690   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:57:48.002805   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:57:48.002820   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:57:48.002854   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:57:48.002936   47919 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:57:48.002946   47919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:57:48.002965   47919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:57:48.003030   47919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-631080 san=[192.168.83.214 192.168.83.214 localhost 127.0.0.1 minikube old-k8s-version-631080]
	I0229 18:57:48.095543   47919 provision.go:172] copyRemoteCerts
	I0229 18:57:48.095594   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:57:48.095617   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.098167   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098411   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.098439   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.098593   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.098770   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.098910   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.099046   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.178774   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:57:48.204896   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:57:48.234660   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:57:48.264189   47919 provision.go:86] duration metric: configureAuth took 267.20486ms
	I0229 18:57:48.264215   47919 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:57:48.264391   47919 config.go:182] Loaded profile config "old-k8s-version-631080": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:57:48.264464   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.267066   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267464   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.267500   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.267721   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.267913   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268105   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.268260   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.268425   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.268630   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.268658   47919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:57:48.560376   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:57:48.560401   47919 machine.go:91] provisioned docker machine in 804.837627ms
	I0229 18:57:48.560414   47919 start.go:300] post-start starting for "old-k8s-version-631080" (driver="kvm2")
	I0229 18:57:48.560426   47919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:57:48.560450   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.560751   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:57:48.560776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.563312   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563638   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.563670   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.563776   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.563971   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.564126   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.564295   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.646996   47919 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:57:48.652329   47919 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:57:48.652356   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:57:48.652428   47919 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:57:48.652538   47919 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:57:48.652665   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:57:48.663432   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:48.694980   47919 start.go:303] post-start completed in 134.554808ms
	I0229 18:57:48.695000   47919 fix.go:56] fixHost completed within 22.230801566s
	I0229 18:57:48.695033   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.697788   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698205   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.698231   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.698416   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.698633   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698797   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.698941   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.699118   47919 main.go:141] libmachine: Using SSH client type: native
	I0229 18:57:48.699327   47919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.214 22 <nil> <nil>}
	I0229 18:57:48.699349   47919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:57:48.808714   47919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233068.793225740
	
	I0229 18:57:48.808740   47919 fix.go:206] guest clock: 1709233068.793225740
	I0229 18:57:48.808751   47919 fix.go:219] Guest: 2024-02-29 18:57:48.79322574 +0000 UTC Remote: 2024-02-29 18:57:48.695003912 +0000 UTC m=+273.807414604 (delta=98.221828ms)
	I0229 18:57:48.808793   47919 fix.go:190] guest clock delta is within tolerance: 98.221828ms
	I0229 18:57:48.808800   47919 start.go:83] releasing machines lock for "old-k8s-version-631080", held for 22.344627122s
	I0229 18:57:48.808832   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.809114   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:48.811872   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812297   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.812336   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.812522   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813072   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813270   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .DriverName
	I0229 18:57:48.813347   47919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:57:48.813392   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.813509   47919 ssh_runner.go:195] Run: cat /version.json
	I0229 18:57:48.813536   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHHostname
	I0229 18:57:48.816200   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816580   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.816607   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816676   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.816753   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.816939   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817097   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817244   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:48.817268   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:48.817293   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.817420   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHPort
	I0229 18:57:48.817538   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHKeyPath
	I0229 18:57:48.817643   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetSSHUsername
	I0229 18:57:48.817769   47919 sshutil.go:53] new ssh client: &{IP:192.168.83.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/old-k8s-version-631080/id_rsa Username:docker}
	I0229 18:57:48.919708   47919 ssh_runner.go:195] Run: systemctl --version
	I0229 18:57:48.926381   47919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:57:49.086263   47919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:57:49.093350   47919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:57:49.093427   47919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:57:49.112686   47919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:57:49.112716   47919 start.go:475] detecting cgroup driver to use...
	I0229 18:57:49.112784   47919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:57:49.135232   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:57:49.152937   47919 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:57:49.152992   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:57:49.172048   47919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:57:49.190450   47919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:57:49.341605   47919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:57:49.539663   47919 docker.go:233] disabling docker service ...
	I0229 18:57:49.539733   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:57:49.562110   47919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:57:49.578761   47919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:57:49.739044   47919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:57:49.897866   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:57:49.918783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:57:45.613998   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.114525   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:57:46.146283   47608 api_server.go:72] duration metric: took 1.032950423s to wait for apiserver process to appear ...
	I0229 18:57:46.146327   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:57:46.146344   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:46.146876   47608 api_server.go:269] stopped: https://192.168.61.34:8443/healthz: Get "https://192.168.61.34:8443/healthz": dial tcp 192.168.61.34:8443: connect: connection refused
	I0229 18:57:46.646633   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.751381   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.751410   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:49.751427   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:49.791602   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:57:49.791634   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:57:50.147094   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.153644   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.153671   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:49.941241   47919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 18:57:49.941328   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.953131   47919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:57:49.953215   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.964850   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.976035   47919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:57:49.988017   47919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:57:50.000990   47919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:57:50.019124   47919 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:57:50.019177   47919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:57:50.042881   47919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:57:50.054219   47919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:57:50.213793   47919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:57:50.387473   47919 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:57:50.387536   47919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:57:50.395113   47919 start.go:543] Will wait 60s for crictl version
	I0229 18:57:50.395177   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:50.400166   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:57:50.446910   47919 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:57:50.447015   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.486139   47919 ssh_runner.go:195] Run: crio --version
	I0229 18:57:50.528290   47919 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 18:57:50.646967   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:50.660388   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:57:50.660420   47608 api_server.go:103] status: https://192.168.61.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:57:51.146674   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 18:57:51.155154   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 18:57:51.166220   47608 api_server.go:141] control plane version: v1.28.4
	I0229 18:57:51.166255   47608 api_server.go:131] duration metric: took 5.019919259s to wait for apiserver health ...
	I0229 18:57:51.166267   47608 cni.go:84] Creating CNI manager for ""
	I0229 18:57:51.166277   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:51.168259   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:57:50.148417   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting to get IP...
	I0229 18:57:50.149211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149601   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.149661   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.149584   48864 retry.go:31] will retry after 287.925969ms: waiting for machine to come up
	I0229 18:57:50.439389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440003   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.440033   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.439944   48864 retry.go:31] will retry after 341.540721ms: waiting for machine to come up
	I0229 18:57:50.783988   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:50.784622   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:50.784544   48864 retry.go:31] will retry after 344.053696ms: waiting for machine to come up
	I0229 18:57:51.130288   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131126   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.131152   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.131075   48864 retry.go:31] will retry after 593.843769ms: waiting for machine to come up
	I0229 18:57:51.726464   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.726974   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:51.727000   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:51.726879   48864 retry.go:31] will retry after 689.199247ms: waiting for machine to come up
	I0229 18:57:52.418297   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418801   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:52.418829   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:52.418753   48864 retry.go:31] will retry after 737.671716ms: waiting for machine to come up
	I0229 18:57:53.158161   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158573   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:53.158618   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:53.158521   48864 retry.go:31] will retry after 1.18162067s: waiting for machine to come up
	I0229 18:57:50.530077   47919 main.go:141] libmachine: (old-k8s-version-631080) Calling .GetIP
	I0229 18:57:50.533389   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.533761   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:b2:7e", ip: ""} in network mk-old-k8s-version-631080: {Iface:virbr4 ExpiryTime:2024-02-29 19:57:38 +0000 UTC Type:0 Mac:52:54:00:1b:b2:7e Iaid: IPaddr:192.168.83.214 Prefix:24 Hostname:old-k8s-version-631080 Clientid:01:52:54:00:1b:b2:7e}
	I0229 18:57:50.533794   47919 main.go:141] libmachine: (old-k8s-version-631080) DBG | domain old-k8s-version-631080 has defined IP address 192.168.83.214 and MAC address 52:54:00:1b:b2:7e in network mk-old-k8s-version-631080
	I0229 18:57:50.534001   47919 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0229 18:57:50.538857   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:50.556961   47919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 18:57:50.557028   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:50.616925   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:50.617001   47919 ssh_runner.go:195] Run: which lz4
	I0229 18:57:50.622857   47919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:57:50.628035   47919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:57:50.628070   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 18:57:52.679575   47919 crio.go:444] Took 2.056751 seconds to copy over tarball
	I0229 18:57:52.679656   47919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:57:51.169655   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:57:51.184521   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:57:51.215791   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:57:51.235050   47608 system_pods.go:59] 8 kube-system pods found
	I0229 18:57:51.235136   47608 system_pods.go:61] "coredns-5dd5756b68-6b5pm" [d8023f3b-fc07-4dd4-98dc-bd321d137a06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:57:51.235150   47608 system_pods.go:61] "etcd-embed-certs-991128" [01a1ee82-a650-4736-8fb9-e983427bef96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:57:51.235161   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [a6810e01-a958-4e7b-ba0f-6cd2e747b998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:57:51.235170   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [6469e9c8-7372-4756-926d-0de600c8ed4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:57:51.235179   47608 system_pods.go:61] "kube-proxy-zd7rf" [963b5fb6-f287-4c80-a324-b0cb09b1ae97] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:57:51.235190   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [ac2e7c83-6e96-46ba-aeed-c847d312ba4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:57:51.235199   47608 system_pods.go:61] "metrics-server-57f55c9bc5-5w6c9" [6ddb9b39-e1d1-4d34-bb45-e9a5c161f13d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:57:51.235220   47608 system_pods.go:61] "storage-provisioner" [99d0cbe5-bb8b-472b-be91-9f29442c8c1d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:57:51.235243   47608 system_pods.go:74] duration metric: took 19.430245ms to wait for pod list to return data ...
	I0229 18:57:51.235257   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:57:51.241823   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:57:51.241849   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 18:57:51.241863   47608 node_conditions.go:105] duration metric: took 6.600606ms to run NodePressure ...
	I0229 18:57:51.241884   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:57:51.654038   47608 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663120   47608 kubeadm.go:787] kubelet initialised
	I0229 18:57:51.663146   47608 kubeadm.go:788] duration metric: took 9.079921ms waiting for restarted kubelet to initialise ...
	I0229 18:57:51.663156   47608 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:57:51.671417   47608 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:57:53.679921   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:54.342488   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.342981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:54.343006   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:54.342931   48864 retry.go:31] will retry after 1.180730966s: waiting for machine to come up
	I0229 18:57:55.524954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525398   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:55.525427   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:55.525338   48864 retry.go:31] will retry after 1.706902899s: waiting for machine to come up
	I0229 18:57:57.233340   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233834   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:57.233862   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:57.233791   48864 retry.go:31] will retry after 2.281506267s: waiting for machine to come up
	I0229 18:57:55.661321   47919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.981628592s)
	I0229 18:57:55.661351   47919 crio.go:451] Took 2.981744 seconds to extract the tarball
	I0229 18:57:55.661363   47919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:57:55.708924   47919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:57:55.751627   47919 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:57:55.751650   47919 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:57:55.751726   47919 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.751752   47919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.751758   47919 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:57:55.751735   47919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.751751   47919 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.751772   47919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.751864   47919 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.752153   47919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753139   47919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:55.753456   47919 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:55.753467   47919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:55.753476   47919 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:57:55.753486   47919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.753547   47919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:55.934620   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988723   47919 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:57:55.988767   47919 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:55.988811   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:55.993750   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:57:56.036192   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.037872   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.038123   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:57:56.040846   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:57:56.046242   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.065126   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.077683   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.126642   47919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:57:56.126683   47919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.126741   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.191928   47919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:57:56.191980   47919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.192006   47919 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:57:56.192037   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.192045   47919 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:57:56.192086   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.203773   47919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:57:56.203819   47919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.203863   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227761   47919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:57:56.227799   47919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.227832   47919 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:57:56.227856   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227864   47919 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.227876   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:57:56.227922   47919 ssh_runner.go:195] Run: which crictl
	I0229 18:57:56.227925   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:57:56.227956   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:57:56.227961   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:57:56.246645   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:57:56.344012   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:57:56.344125   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:57:56.346352   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:57:56.361309   47919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:57:56.361484   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:57:56.383942   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:57:56.411697   47919 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:57:56.649625   47919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:57:56.801430   47919 cache_images.go:92] LoadImages completed in 1.049765957s
	W0229 18:57:56.801578   47919 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0229 18:57:56.801670   47919 ssh_runner.go:195] Run: crio config
	I0229 18:57:56.872210   47919 cni.go:84] Creating CNI manager for ""
	I0229 18:57:56.872238   47919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:57:56.872260   47919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:57:56.872283   47919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-631080 NodeName:old-k8s-version-631080 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:57:56.872458   47919 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-631080"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-631080
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:57:56.872545   47919 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-631080 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:57:56.872620   47919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:57:56.884571   47919 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:57:56.884647   47919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:57:56.896167   47919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 18:57:56.916824   47919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:57:56.938739   47919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 18:57:56.961411   47919 ssh_runner.go:195] Run: grep 192.168.83.214	control-plane.minikube.internal$ /etc/hosts
	I0229 18:57:56.966134   47919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:57:56.981089   47919 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080 for IP: 192.168.83.214
	I0229 18:57:56.981121   47919 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:56.981295   47919 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:57:56.981358   47919 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:57:56.981465   47919 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.key
	I0229 18:57:56.981533   47919 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key.89a58109
	I0229 18:57:56.981586   47919 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key
	I0229 18:57:56.981755   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:57:56.981791   47919 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:57:56.981806   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:57:56.981845   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:57:56.981878   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:57:56.981910   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:57:56.981961   47919 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:57:56.982889   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:57:57.015587   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:57:57.048698   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:57:57.078634   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:57:57.114008   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:57:57.146884   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:57:57.179560   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:57:57.211989   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:57:57.246936   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:57:57.280651   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:57:57.310050   47919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:57:57.337439   47919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:57:57.359100   47919 ssh_runner.go:195] Run: openssl version
	I0229 18:57:57.366111   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:57:57.380593   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386703   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.386771   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:57:57.401429   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:57:57.416516   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:57:57.429645   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.434960   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.435010   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:57:57.441855   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:57:57.457277   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:57:57.471345   47919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476556   47919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.476629   47919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:57:57.483318   47919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:57:57.496355   47919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:57:57.501976   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:57:57.509611   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:57:57.516861   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:57:57.523819   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:57:57.530959   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:57:57.539788   47919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:57:57.548575   47919 kubeadm.go:404] StartCluster: {Name:old-k8s-version-631080 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-631080 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:57:57.548663   47919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:57:57.548731   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:57:57.596234   47919 cri.go:89] found id: ""
	I0229 18:57:57.596327   47919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:57:57.612827   47919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:57:57.612856   47919 kubeadm.go:636] restartCluster start
	I0229 18:57:57.612903   47919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:57:57.627565   47919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:57.629049   47919 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-631080" does not appear in /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:57:57.630139   47919 kubeconfig.go:146] "old-k8s-version-631080" context is missing from /home/jenkins/minikube-integration/18259-6428/kubeconfig - will repair!
	I0229 18:57:57.631735   47919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:57:57.634318   47919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:57:57.648383   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:57.648458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:57.663708   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.149010   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.149086   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.164430   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:58.649075   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:58.649186   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:58.663768   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.149370   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.149450   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.165089   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:59.648609   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:57:59.648690   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:57:59.665224   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:57:56.182137   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:58.681550   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:57:59.517428   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518040   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:57:59.518069   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:57:59.517984   48864 retry.go:31] will retry after 2.738727804s: waiting for machine to come up
	I0229 18:58:02.258042   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258540   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:02.258569   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:02.258498   48864 retry.go:31] will retry after 2.520892118s: waiting for machine to come up
	I0229 18:58:00.148880   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.148969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.168561   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:00.649227   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:00.649308   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:00.668162   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.148539   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.148600   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.168347   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.649392   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:01.649484   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:01.663974   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.149462   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.149548   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.164757   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:02.649398   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:02.649522   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:02.664014   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.148502   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.148718   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.165374   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:03.648528   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:03.648594   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:03.663305   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.148760   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.148847   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.163480   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:04.649122   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:04.649226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:04.663556   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:01.179941   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:03.679523   47608 pod_ready.go:102] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:04.179171   47608 pod_ready.go:92] pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.179198   47608 pod_ready.go:81] duration metric: took 12.507755709s waiting for pod "coredns-5dd5756b68-6b5pm" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.179212   47608 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184638   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.184657   47608 pod_ready.go:81] duration metric: took 5.438559ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.184665   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189119   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.189139   47608 pod_ready.go:81] duration metric: took 4.467998ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.189147   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193800   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.193819   47608 pod_ready.go:81] duration metric: took 4.66771ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.193827   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198220   47608 pod_ready.go:92] pod "kube-proxy-zd7rf" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.198239   47608 pod_ready.go:81] duration metric: took 4.405824ms waiting for pod "kube-proxy-zd7rf" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.198246   47608 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575846   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:04.575869   47608 pod_ready.go:81] duration metric: took 377.617228ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.575878   47608 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:04.780871   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781307   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | unable to find current IP address of domain default-k8s-diff-port-153528 in network mk-default-k8s-diff-port-153528
	I0229 18:58:04.781334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | I0229 18:58:04.781266   48864 retry.go:31] will retry after 3.73331916s: waiting for machine to come up
	I0229 18:58:08.519173   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.519646   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Found IP for machine: 192.168.39.210
	I0229 18:58:08.519666   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserving static IP address...
	I0229 18:58:08.519687   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has current primary IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.520011   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.520032   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Reserved static IP address: 192.168.39.210
	I0229 18:58:08.520046   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | skip adding static IP to network mk-default-k8s-diff-port-153528 - found existing host DHCP lease matching {name: "default-k8s-diff-port-153528", mac: "52:54:00:78:ec:2b", ip: "192.168.39.210"}
	I0229 18:58:08.520057   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Getting to WaitForSSH function...
	I0229 18:58:08.520067   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Waiting for SSH to be available...
	I0229 18:58:08.522047   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522377   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.522411   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.522529   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH client type: external
	I0229 18:58:08.522555   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa (-rw-------)
	I0229 18:58:08.522592   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:08.522606   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | About to run SSH command:
	I0229 18:58:08.522616   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | exit 0
	I0229 18:58:08.651113   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:08.651447   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetConfigRaw
	I0229 18:58:08.652078   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.654739   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655191   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.655222   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.655516   48088 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/config.json ...
	I0229 18:58:08.655758   48088 machine.go:88] provisioning docker machine ...
	I0229 18:58:08.655787   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:08.655976   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656109   48088 buildroot.go:166] provisioning hostname "default-k8s-diff-port-153528"
	I0229 18:58:08.656127   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.656273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.658580   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.658933   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.658961   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.659066   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.659255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659419   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.659547   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.659714   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.659933   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.659952   48088 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-153528 && echo "default-k8s-diff-port-153528" | sudo tee /etc/hostname
	I0229 18:58:08.782704   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-153528
	
	I0229 18:58:08.782727   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.785588   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.785939   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.785967   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.786107   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:08.786290   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786430   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:08.786550   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:08.786675   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:08.786910   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:08.786937   48088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-153528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-153528/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-153528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:08.906593   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:08.906630   48088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:08.906671   48088 buildroot.go:174] setting up certificates
	I0229 18:58:08.906683   48088 provision.go:83] configureAuth start
	I0229 18:58:08.906700   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetMachineName
	I0229 18:58:08.906992   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:08.909897   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.910299   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.910420   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:08.912899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913333   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:08.913363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:08.913526   48088 provision.go:138] copyHostCerts
	I0229 18:58:08.913589   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:08.913602   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:08.913671   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:08.913796   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:08.913808   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:08.913838   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:08.913920   48088 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:08.913940   48088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:08.913969   48088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:08.914052   48088 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-153528 san=[192.168.39.210 192.168.39.210 localhost 127.0.0.1 minikube default-k8s-diff-port-153528]
	I0229 18:58:09.033009   48088 provision.go:172] copyRemoteCerts
	I0229 18:58:09.033064   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:09.033087   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.035647   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036023   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.036061   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.036262   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.036434   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.036582   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.036685   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.127168   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:09.162113   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 18:58:09.191657   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:09.224555   48088 provision.go:86] duration metric: configureAuth took 317.8564ms
	I0229 18:58:09.224589   48088 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:09.224789   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:58:09.224877   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.227193   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227549   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.227577   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.227731   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.227950   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228111   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.228266   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.228398   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.228595   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.228617   48088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:09.760261   47515 start.go:369] acquired machines lock for "no-preload-247197" in 59.368392801s
	I0229 18:58:09.760316   47515 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:58:09.760326   47515 fix.go:54] fixHost starting: 
	I0229 18:58:09.760731   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:58:09.760768   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:58:09.777304   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0229 18:58:09.777781   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:58:09.778277   47515 main.go:141] libmachine: Using API Version  1
	I0229 18:58:09.778301   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:58:09.778655   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:58:09.778829   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:09.779012   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 18:58:09.780644   47515 fix.go:102] recreateIfNeeded on no-preload-247197: state=Stopped err=<nil>
	I0229 18:58:09.780670   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	W0229 18:58:09.780844   47515 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:58:09.782653   47515 out.go:177] * Restarting existing kvm2 VM for "no-preload-247197" ...
	I0229 18:58:05.149421   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.149514   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.164236   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:05.648767   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:05.648856   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:05.664890   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.148979   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.149069   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.165186   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:06.649135   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:06.649245   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:06.665357   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.148896   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.148978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.163358   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.649238   47919 api_server.go:166] Checking apiserver status ...
	I0229 18:58:07.649309   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:07.665329   47919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:07.665359   47919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:07.665368   47919 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:07.665378   47919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:07.665433   47919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:07.713980   47919 cri.go:89] found id: ""
	I0229 18:58:07.714045   47919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:07.740723   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:07.753838   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:07.753914   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767175   47919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:07.767197   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:07.902881   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.741237   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:08.970287   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.099101   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:09.214816   47919 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:09.214897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:09.715311   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
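
The repeated "waiting for apiserver process to appear" / pgrep lines are a fixed-interval poll: roughly every 500ms the runner asks the guest for a kube-apiserver PID until one shows up or a deadline passes. A stripped-down sketch of that loop is below (standard library only; the 30s deadline and running pgrep locally rather than over SSH are assumptions for illustration).

// Poll for a kube-apiserver PID roughly the way api_server.go does above:
// run pgrep every 500ms until it succeeds or the deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed deadline
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}
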
	I0229 18:58:06.583750   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.083063   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:09.517694   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:09.517720   48088 machine.go:91] provisioned docker machine in 861.950931ms
	I0229 18:58:09.517732   48088 start.go:300] post-start starting for "default-k8s-diff-port-153528" (driver="kvm2")
	I0229 18:58:09.517742   48088 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:09.517755   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.518097   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:09.518134   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.520915   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521255   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.521285   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.521389   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.521590   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.521761   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.521911   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.606485   48088 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:09.611376   48088 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:09.611404   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:09.611468   48088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:09.611564   48088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:09.611679   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:09.621573   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:09.648803   48088 start.go:303] post-start completed in 131.058856ms
	I0229 18:58:09.648825   48088 fix.go:56] fixHost completed within 20.839852585s
	I0229 18:58:09.648848   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.651416   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651743   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.651771   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.651917   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.652114   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652273   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.652392   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.652563   48088 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:09.652715   48088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0229 18:58:09.652728   48088 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:09.760132   48088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233089.743154671
	
	I0229 18:58:09.760154   48088 fix.go:206] guest clock: 1709233089.743154671
	I0229 18:58:09.760160   48088 fix.go:219] Guest: 2024-02-29 18:58:09.743154671 +0000 UTC Remote: 2024-02-29 18:58:09.648829212 +0000 UTC m=+270.421886207 (delta=94.325459ms)
	I0229 18:58:09.760177   48088 fix.go:190] guest clock delta is within tolerance: 94.325459ms
	I0229 18:58:09.760183   48088 start.go:83] releasing machines lock for "default-k8s-diff-port-153528", held for 20.951247697s
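
The fix.go lines just above read the guest clock over SSH (date +%s.%N, which the report renders as date +%!s(MISSING).%!N(MISSING)), compare it with the host clock, and accept the drift if it is within tolerance; here the delta was about 94ms. A small sketch of that comparison follows; the 2-second tolerance is an assumed value for illustration, not minikube's actual constant.

// Compare guest vs. host clock and decide whether the drift is acceptable,
// mirroring the "guest clock delta is within tolerance" check above.
package main

import (
	"fmt"
	"time"
)

const clockTolerance = 2 * time.Second // assumed tolerance, for illustration only

func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= clockTolerance
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(1709233089, 743154671)
	host := time.Unix(1709233089, 648829212)
	delta, ok := clockDeltaOK(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
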
	I0229 18:58:09.760211   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.760473   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:09.763342   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763701   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.763746   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.763896   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764519   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764720   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 18:58:09.764801   48088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:09.764849   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.764951   48088 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:09.764981   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 18:58:09.767670   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.767861   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768035   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768054   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768204   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768322   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:09.768345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:09.768347   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768504   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.768518   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 18:58:09.768673   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.768694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 18:58:09.768890   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 18:58:09.769024   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 18:58:09.849055   48088 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:09.872309   48088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:10.015348   48088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:10.023333   48088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:10.023405   48088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:10.042264   48088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:10.042288   48088 start.go:475] detecting cgroup driver to use...
	I0229 18:58:10.042361   48088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:10.062390   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:10.080651   48088 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:10.080714   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:10.098478   48088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:10.115610   48088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:10.250212   48088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:10.402800   48088 docker.go:233] disabling docker service ...
	I0229 18:58:10.402862   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:10.419793   48088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:10.435149   48088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:10.589671   48088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:10.714460   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:10.730820   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:10.753910   48088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:10.753977   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.766151   48088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:10.766232   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.778824   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.792936   48088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:10.810158   48088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:10.828150   48088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:10.843416   48088 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:10.843488   48088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:10.866488   48088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:10.880628   48088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:11.031221   48088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 18:58:11.199068   48088 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:11.199143   48088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:11.204851   48088 start.go:543] Will wait 60s for crictl version
	I0229 18:58:11.204922   48088 ssh_runner.go:195] Run: which crictl
	I0229 18:58:11.209384   48088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:11.256928   48088 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:11.257014   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.293338   48088 ssh_runner.go:195] Run: crio --version
	I0229 18:58:11.329107   48088 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
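
After systemctl restart crio the runner waits up to 60s for /var/run/crio/crio.sock to exist and then up to another 60s for crictl version to answer, before reporting the CRI-O version it found. A local sketch of the socket wait (standard library only; the polling interval is an assumption):

// Wait for a unix socket path to appear, as in "Will wait 60s for socket path
// /var/run/crio/crio.sock" above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("%s did not appear within %v", path, timeout)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}
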
	I0229 18:58:09.783970   47515 main.go:141] libmachine: (no-preload-247197) Calling .Start
	I0229 18:58:09.784127   47515 main.go:141] libmachine: (no-preload-247197) Ensuring networks are active...
	I0229 18:58:09.784926   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network default is active
	I0229 18:58:09.785291   47515 main.go:141] libmachine: (no-preload-247197) Ensuring network mk-no-preload-247197 is active
	I0229 18:58:09.785654   47515 main.go:141] libmachine: (no-preload-247197) Getting domain xml...
	I0229 18:58:09.786319   47515 main.go:141] libmachine: (no-preload-247197) Creating domain...
	I0229 18:58:11.102135   47515 main.go:141] libmachine: (no-preload-247197) Waiting to get IP...
	I0229 18:58:11.102911   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.103333   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.103414   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.103321   49001 retry.go:31] will retry after 205.990392ms: waiting for machine to come up
	I0229 18:58:11.310742   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.311298   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.311327   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.311247   49001 retry.go:31] will retry after 353.756736ms: waiting for machine to come up
	I0229 18:58:11.666882   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.667361   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.667392   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.667319   49001 retry.go:31] will retry after 308.284801ms: waiting for machine to come up
	I0229 18:58:11.976805   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:11.977355   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:11.977385   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:11.977309   49001 retry.go:31] will retry after 481.108836ms: waiting for machine to come up
	I0229 18:58:12.459764   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:12.460292   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:12.460330   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:12.460253   49001 retry.go:31] will retry after 549.22451ms: waiting for machine to come up
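
While the no-preload VM boots, the driver keeps querying the libvirt DHCP leases for the domain's IP and, on each miss, schedules another attempt after a growing, jittered delay (206ms, 354ms, and so on up to several seconds above). A generic sketch of that retry-with-backoff shape is below; the base delay, cap, and jitter are assumptions, and minikube's retry.go may differ in detail.

// Retry a fallible lookup with growing, jittered delays, similar in spirit to
// the retry.go "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 200*time.Millisecond, 5*time.Second, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err)
}
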
	I0229 18:58:11.330594   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetIP
	I0229 18:58:11.333628   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334080   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 18:58:11.334112   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 18:58:11.334361   48088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:11.339127   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:11.353078   48088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 18:58:11.353129   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:11.392503   48088 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:58:11.392573   48088 ssh_runner.go:195] Run: which lz4
	I0229 18:58:11.398589   48088 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:58:11.405052   48088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:58:11.405091   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 18:58:13.428402   48088 crio.go:444] Took 2.029836 seconds to copy over tarball
	I0229 18:58:13.428481   48088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
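
The preload path above first asks the runtime which images it already has (sudo crictl images --output json) and only falls back to copying and extracting the ~458MB preloaded tarball when the expected kube-apiserver image is missing. A sketch of that check follows; the JSON field names (images, repoTags) match crictl's usual output but should be treated as an assumption here.

// Check whether a given image is already present according to
// `crictl images --output json`, as crio.go does above before deciding to
// scp and extract the preload tarball.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err) // false => copy over and extract the preload tarball
}
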
	I0229 18:58:10.215640   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:10.715115   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.215866   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.715307   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.215171   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:12.715206   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.215153   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:13.715048   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.215148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:14.715628   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:11.084645   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.087354   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:13.011239   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.011724   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.011751   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.011676   49001 retry.go:31] will retry after 662.346902ms: waiting for machine to come up
	I0229 18:58:13.675622   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:13.676179   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:13.676208   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:13.676115   49001 retry.go:31] will retry after 761.484123ms: waiting for machine to come up
	I0229 18:58:14.439091   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:14.439599   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:14.439626   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:14.439546   49001 retry.go:31] will retry after 980.352556ms: waiting for machine to come up
	I0229 18:58:15.421962   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:15.422377   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:15.422405   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:15.422314   49001 retry.go:31] will retry after 1.134741057s: waiting for machine to come up
	I0229 18:58:16.558585   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:16.559071   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:16.559097   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:16.559005   49001 retry.go:31] will retry after 2.299052603s: waiting for machine to come up
	I0229 18:58:16.327243   48088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898733984s)
	I0229 18:58:16.327277   48088 crio.go:451] Took 2.898846 seconds to extract the tarball
	I0229 18:58:16.327289   48088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:58:16.374029   48088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:16.425625   48088 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 18:58:16.425654   48088 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:58:16.425740   48088 ssh_runner.go:195] Run: crio config
	I0229 18:58:16.477353   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:16.477382   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:16.477406   48088 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:16.477447   48088 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-153528 NodeName:default-k8s-diff-port-153528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:16.477595   48088 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-153528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:16.477659   48088 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-153528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
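
The kubeadm.go:181 block above is the fully rendered config written to /var/tmp/minikube/kubeadm.yaml.new (an InitConfiguration with the node's advertise address and CRI socket, a ClusterConfiguration with the control-plane endpoint on port 8444, plus kubelet and kube-proxy stanzas), followed by the kubelet systemd drop-in. A toy sketch of generating such a file from a handful of parameters with text/template is below; the trimmed template text is purely illustrative and not minikube's actual template.

// Render a trimmed-down kubeadm config from a few parameters, in the spirit
// of the rendered kubeadm.yaml above. Illustrative only.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	NodeIP, NodeName, K8sVersion, PodSubnet, ServiceSubnet string
	Port                                                   int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, params{
		NodeIP:        "192.168.39.210",
		NodeName:      "default-k8s-diff-port-153528",
		K8sVersion:    "v1.28.4",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
		Port:          8444,
	})
}
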
	I0229 18:58:16.477718   48088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:58:16.489240   48088 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:16.489301   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:16.500764   48088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:58:16.522927   48088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:58:16.543902   48088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 18:58:16.565262   48088 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:16.571163   48088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:16.585476   48088 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528 for IP: 192.168.39.210
	I0229 18:58:16.585507   48088 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:16.585657   48088 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:16.585704   48088 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:16.585772   48088 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.key
	I0229 18:58:16.647093   48088 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key.6213553a
	I0229 18:58:16.647194   48088 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key
	I0229 18:58:16.647398   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:16.647463   48088 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:16.647476   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:16.647501   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:16.647527   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:16.647553   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:16.647591   48088 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:16.648235   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:16.678452   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:58:16.708360   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:16.740905   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:16.768820   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:16.799459   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:16.829488   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:16.860881   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:16.893064   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:16.923404   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:16.952531   48088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:16.980895   48088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:17.001306   48088 ssh_runner.go:195] Run: openssl version
	I0229 18:58:17.007995   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:17.024000   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030471   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.030544   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:17.038306   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:17.050985   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:17.063089   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068437   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.068485   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:17.075156   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:17.087015   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:17.099964   48088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105272   48088 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.105333   48088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:17.112447   48088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:17.126499   48088 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:17.133216   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:17.140320   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:17.147900   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:17.154931   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:17.163552   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:17.172256   48088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
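
The openssl x509 -noout -in ... -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least another 24 hours; a certificate expiring inside that window would force regeneration. An equivalent standard-library check in Go (the file path and the 24-hour window mirror the log; everything else is an illustrative assumption):

// Equivalent of `openssl x509 -checkend 86400`: does the certificate remain
// valid for at least the next 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for another 24h:", ok)
}
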
	I0229 18:58:17.181349   48088 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-153528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-153528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:17.181481   48088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:17.181554   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:17.227444   48088 cri.go:89] found id: ""
	I0229 18:58:17.227532   48088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:17.242533   48088 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:17.242562   48088 kubeadm.go:636] restartCluster start
	I0229 18:58:17.242616   48088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:17.254713   48088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.256305   48088 kubeconfig.go:92] found "default-k8s-diff-port-153528" server: "https://192.168.39.210:8444"
	I0229 18:58:17.259432   48088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:17.281454   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.281525   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.295342   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:17.781719   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:17.781807   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:17.797462   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.281981   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.282082   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.300449   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:18.781952   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:18.782024   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:18.796641   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:15.215935   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.714969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.215921   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:16.715200   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.215151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:17.715520   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.215291   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:18.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.215157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:19.715037   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:15.585143   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.086077   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:18.861140   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:18.861635   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:18.861658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:18.861584   49001 retry.go:31] will retry after 2.115098542s: waiting for machine to come up
	I0229 18:58:20.978165   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:20.978625   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:20.978658   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:20.978570   49001 retry.go:31] will retry after 3.520116791s: waiting for machine to come up
	I0229 18:58:19.282008   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.282093   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.297806   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:19.782384   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:19.782465   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:19.802496   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.281712   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.281777   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.298545   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.782139   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:20.782249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:20.799615   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.282180   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.282288   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.297649   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:21.782263   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:21.782341   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:21.797537   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.282131   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.282211   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.303084   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:22.781558   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:22.781645   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:22.797155   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.281645   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.281727   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.296059   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:23.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:23.781663   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:23.797132   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:20.215501   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.214953   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:21.715762   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.215608   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:22.715556   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.215633   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:23.715012   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.215182   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:24.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:20.585475   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:22.586962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:25.082804   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:24.503134   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:24.503537   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:24.503561   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:24.503495   49001 retry.go:31] will retry after 3.056941725s: waiting for machine to come up
	I0229 18:58:27.562228   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:27.562698   47515 main.go:141] libmachine: (no-preload-247197) DBG | unable to find current IP address of domain no-preload-247197 in network mk-no-preload-247197
	I0229 18:58:27.562729   47515 main.go:141] libmachine: (no-preload-247197) DBG | I0229 18:58:27.562650   49001 retry.go:31] will retry after 5.535128197s: waiting for machine to come up
	I0229 18:58:24.282207   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.282273   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.298683   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:24.781997   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:24.782088   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:24.796544   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.282137   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.282249   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.297916   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:25.782489   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:25.782605   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:25.800171   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.281679   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.281764   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.296395   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:26.781581   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:26.781700   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:26.796380   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.282230   48088 api_server.go:166] Checking apiserver status ...
	I0229 18:58:27.282319   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:27.300719   48088 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:27.300745   48088 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:58:27.300753   48088 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:58:27.300762   48088 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:58:27.300822   48088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:27.344465   48088 cri.go:89] found id: ""
	I0229 18:58:27.344525   48088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:58:27.367244   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:58:27.379831   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:58:27.379895   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390372   48088 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:58:27.390393   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:27.521441   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.070547   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.324425   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.416807   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:28.485785   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:58:28.485880   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.986473   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.215272   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:25.715667   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.215566   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:26.715860   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.214993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.715679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.215093   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:28.715081   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.215188   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.715981   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:27.585150   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.585716   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:29.486136   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:29.512004   48088 api_server.go:72] duration metric: took 1.026225672s to wait for apiserver process to appear ...
	I0229 18:58:29.512036   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:58:29.512081   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:29.512602   48088 api_server.go:269] stopped: https://192.168.39.210:8444/healthz: Get "https://192.168.39.210:8444/healthz": dial tcp 192.168.39.210:8444: connect: connection refused
	I0229 18:58:30.012197   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.076090   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:58:33.076122   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:58:33.076141   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.115044   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.115082   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:33.512305   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:33.518615   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:33.518640   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.012514   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.024771   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:58:34.024809   48088 api_server.go:103] status: https://192.168.39.210:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:58:34.512427   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 18:58:34.519703   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 18:58:34.527814   48088 api_server.go:141] control plane version: v1.28.4
	I0229 18:58:34.527850   48088 api_server.go:131] duration metric: took 5.015799681s to wait for apiserver health ...
	I0229 18:58:34.527862   48088 cni.go:84] Creating CNI manager for ""
	I0229 18:58:34.527869   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:34.529573   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:58:30.215544   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:30.715080   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.215386   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:31.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.215078   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.715087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.215842   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:33.714950   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.215778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:34.715201   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:32.084243   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:34.087247   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:33.099983   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100527   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has current primary IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.100548   47515 main.go:141] libmachine: (no-preload-247197) Found IP for machine: 192.168.50.72
	I0229 18:58:33.100584   47515 main.go:141] libmachine: (no-preload-247197) Reserving static IP address...
	I0229 18:58:33.100959   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.100985   47515 main.go:141] libmachine: (no-preload-247197) DBG | skip adding static IP to network mk-no-preload-247197 - found existing host DHCP lease matching {name: "no-preload-247197", mac: "52:54:00:2c:2f:53", ip: "192.168.50.72"}
	I0229 18:58:33.100999   47515 main.go:141] libmachine: (no-preload-247197) Reserved static IP address: 192.168.50.72
	I0229 18:58:33.101016   47515 main.go:141] libmachine: (no-preload-247197) Waiting for SSH to be available...
	I0229 18:58:33.101057   47515 main.go:141] libmachine: (no-preload-247197) DBG | Getting to WaitForSSH function...
	I0229 18:58:33.103439   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.103766   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.103817   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.104041   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH client type: external
	I0229 18:58:33.104069   47515 main.go:141] libmachine: (no-preload-247197) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa (-rw-------)
	I0229 18:58:33.104110   47515 main.go:141] libmachine: (no-preload-247197) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:58:33.104131   47515 main.go:141] libmachine: (no-preload-247197) DBG | About to run SSH command:
	I0229 18:58:33.104145   47515 main.go:141] libmachine: (no-preload-247197) DBG | exit 0
	I0229 18:58:33.240401   47515 main.go:141] libmachine: (no-preload-247197) DBG | SSH cmd err, output: <nil>: 
	I0229 18:58:33.240811   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetConfigRaw
	I0229 18:58:33.241500   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.244578   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.244970   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.245002   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.245358   47515 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/config.json ...
	I0229 18:58:33.245522   47515 machine.go:88] provisioning docker machine ...
	I0229 18:58:33.245542   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:33.245755   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.245935   47515 buildroot.go:166] provisioning hostname "no-preload-247197"
	I0229 18:58:33.245977   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.246175   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.248841   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249263   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.249284   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.249458   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.249629   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249767   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.249946   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.250120   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.250335   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.250351   47515 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247197 && echo "no-preload-247197" | sudo tee /etc/hostname
	I0229 18:58:33.386175   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247197
	
	I0229 18:58:33.386210   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.389491   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.389909   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.389950   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.390080   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.390288   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390495   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.390678   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.390844   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.391058   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.391090   47515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247197/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:33.510209   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:33.510243   47515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 18:58:33.510263   47515 buildroot.go:174] setting up certificates
	I0229 18:58:33.510273   47515 provision.go:83] configureAuth start
	I0229 18:58:33.510281   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetMachineName
	I0229 18:58:33.510582   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:33.513357   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513741   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.513769   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.513936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.516227   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516513   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.516543   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.516700   47515 provision.go:138] copyHostCerts
	I0229 18:58:33.516746   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 18:58:33.516761   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 18:58:33.516824   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 18:58:33.516931   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 18:58:33.516952   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 18:58:33.516987   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 18:58:33.517066   47515 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 18:58:33.517077   47515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 18:58:33.517106   47515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 18:58:33.517181   47515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.no-preload-247197 san=[192.168.50.72 192.168.50.72 localhost 127.0.0.1 minikube no-preload-247197]
	I0229 18:58:33.651858   47515 provision.go:172] copyRemoteCerts
	I0229 18:58:33.651914   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:33.651936   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.655072   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655551   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.655584   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.655776   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.655952   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.656103   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.656277   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:33.747197   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:58:33.776690   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:58:33.804404   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:33.831068   47515 provision.go:86] duration metric: configureAuth took 320.782451ms
	I0229 18:58:33.831105   47515 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:58:33.831336   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:33.831469   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:33.834209   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834617   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:33.834650   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:33.834845   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:33.835046   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835215   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:33.835343   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:33.835503   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:33.835694   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:33.835717   47515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 18:58:34.141350   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 18:58:34.141372   47515 machine.go:91] provisioned docker machine in 895.837431ms
	I0229 18:58:34.141385   47515 start.go:300] post-start starting for "no-preload-247197" (driver="kvm2")
	I0229 18:58:34.141399   47515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:34.141422   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.141763   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:34.141800   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.144673   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145078   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.145106   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.145225   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.145387   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.145509   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.145618   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.241817   47515 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:34.247096   47515 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:58:34.247125   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 18:58:34.247200   47515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 18:58:34.247294   47515 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 18:58:34.247386   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:34.261959   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:34.293974   47515 start.go:303] post-start completed in 152.574202ms
	I0229 18:58:34.294000   47515 fix.go:56] fixHost completed within 24.533673806s
	I0229 18:58:34.294031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.297066   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297455   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.297480   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.297685   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.297865   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298064   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.298256   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.298448   47515 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:34.298671   47515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0229 18:58:34.298687   47515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:58:34.416701   47515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233114.391433365
	
	I0229 18:58:34.416724   47515 fix.go:206] guest clock: 1709233114.391433365
	I0229 18:58:34.416733   47515 fix.go:219] Guest: 2024-02-29 18:58:34.391433365 +0000 UTC Remote: 2024-02-29 18:58:34.294005249 +0000 UTC m=+366.458860154 (delta=97.428116ms)
	I0229 18:58:34.416763   47515 fix.go:190] guest clock delta is within tolerance: 97.428116ms
	I0229 18:58:34.416770   47515 start.go:83] releasing machines lock for "no-preload-247197", held for 24.656479144s
	I0229 18:58:34.416795   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.417031   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:34.419713   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420098   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.420129   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.420288   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420789   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.420989   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 18:58:34.421076   47515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:34.421125   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.421239   47515 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:34.421268   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 18:58:34.424047   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424359   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424399   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424418   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424564   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.424731   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.424803   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:34.424829   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:34.424969   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425124   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.425217   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 18:58:34.425348   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 18:58:34.425506   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 18:58:34.425705   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 18:58:34.505253   47515 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:34.533780   47515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 18:58:34.696609   47515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:34.703768   47515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:34.703848   47515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:58:34.723243   47515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:58:34.723271   47515 start.go:475] detecting cgroup driver to use...
	I0229 18:58:34.723342   47515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:58:34.743696   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:58:34.760022   47515 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:58:34.760085   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:58:34.775217   47515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:58:34.791576   47515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:58:34.920544   47515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:58:35.093684   47515 docker.go:233] disabling docker service ...
	I0229 18:58:35.093760   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:58:35.112349   47515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:58:35.128145   47515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:58:35.246120   47515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:58:35.363110   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:58:35.378087   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:35.399610   47515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 18:58:35.399658   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.410579   47515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 18:58:35.410624   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.421664   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.432726   47515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 18:58:35.443728   47515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:35.455072   47515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:35.467211   47515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:58:35.467263   47515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:58:35.480669   47515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:35.491649   47515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:35.621272   47515 ssh_runner.go:195] Run: sudo systemctl restart crio
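	The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup), reload systemd, and restart CRI-O. Below is a minimal Go sketch of the same sed-then-restart sequence run locally rather than through minikube's ssh_runner; the sed expressions mirror the commands logged above, while the run helper and error handling are illustrative assumptions.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell command with sudo and surfaces its combined output on failure.
	func run(cmd string) error {
		out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
		}
		return nil
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			// pin the pause image used for pod sandboxes
			fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
			// switch the cgroup manager to cgroupfs and keep conmon in the pod cgroup
			fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			// pick up the new configuration
			"systemctl daemon-reload",
			"systemctl restart crio",
		}
		for _, step := range steps {
			if err := run(step); err != nil {
				fmt.Println(err)
				return
			}
		}
	}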
	I0229 18:58:35.793148   47515 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 18:58:35.793225   47515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 18:58:35.798495   47515 start.go:543] Will wait 60s for crictl version
	I0229 18:58:35.798556   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:35.803756   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:35.848168   47515 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 18:58:35.848259   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.879346   47515 ssh_runner.go:195] Run: crio --version
	I0229 18:58:35.911939   47515 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 18:58:35.913174   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetIP
	I0229 18:58:35.915761   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916134   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 18:58:35.916162   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 18:58:35.916350   47515 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:58:35.921206   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:35.936342   47515 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 18:58:35.936375   47515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:58:35.974456   47515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 18:58:35.974475   47515 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:58:35.974509   47515 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.974546   47515 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.974567   47515 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:35.974613   47515 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.974668   47515 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.974733   47515 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.974780   47515 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975073   47515 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975958   47515 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:35.975981   47515 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:35.975993   47515 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:35.976002   47515 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:35.976027   47515 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 18:58:35.975963   47515 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:35.975959   47515 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:35.976249   47515 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.111205   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 18:58:36.124071   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.150002   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.196158   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.258361   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.273898   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.283390   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.336487   47515 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 18:58:36.336531   47515 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.336541   47515 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 18:58:36.336577   47515 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.336590   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336620   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336636   47515 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 18:58:36.336661   47515 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.336670   47515 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 18:58:36.336695   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.336697   47515 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.336723   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.383302   47515 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 18:58:36.383347   47515 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.383402   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393420   47515 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 18:58:36.393444   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 18:58:36.393459   47515 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.393495   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:36.393527   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 18:58:36.393579   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 18:58:36.393612   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 18:58:36.393665   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 18:58:36.503611   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.503707   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:36.508306   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.508403   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.511536   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 18:58:36.511600   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511636   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:36.511706   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.511721   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 18:58:36.511749   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:36.511781   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:36.522392   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 18:58:36.522413   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522458   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 18:58:36.522645   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 18:58:36.523319   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 18:58:36.529871   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576922   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 18:58:36.576994   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 18:58:36.577093   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:36.892014   47515 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:34.530886   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:58:34.547233   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:58:34.572237   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:58:34.586775   48088 system_pods.go:59] 8 kube-system pods found
	I0229 18:58:34.586816   48088 system_pods.go:61] "coredns-5dd5756b68-tr4nn" [016aff45-17c3-4278-a7f3-1e0a5770f1d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:58:34.586827   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [829f38ad-e4e4-434d-8da6-dde64deeb1ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:58:34.586837   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [e27986e6-58a2-4acc-8a41-d4662ce0848d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:58:34.586853   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [fb77dff9-141e-495f-9be8-f570f9387bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:58:34.586868   48088 system_pods.go:61] "kube-proxy-fwqsv" [af8cd0ff-71dd-44d4-8918-496e27654cbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:58:34.586887   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [a325ec8e-4613-4447-87b1-c23b5b614352] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:58:34.586898   48088 system_pods.go:61] "metrics-server-57f55c9bc5-226bj" [80d7a4c6-e9b5-4324-8c61-489a874a9e79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:58:34.586910   48088 system_pods.go:61] "storage-provisioner" [4270d9ce-329f-4531-9563-65a398f8b168] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:58:34.586919   48088 system_pods.go:74] duration metric: took 14.657543ms to wait for pod list to return data ...
	I0229 18:58:34.586932   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:58:34.595109   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:58:34.595144   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 18:58:34.595158   48088 node_conditions.go:105] duration metric: took 8.219984ms to run NodePressure ...
	I0229 18:58:34.595179   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:58:34.946493   48088 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951066   48088 kubeadm.go:787] kubelet initialised
	I0229 18:58:34.951088   48088 kubeadm.go:788] duration metric: took 4.569338ms waiting for restarted kubelet to initialise ...
	I0229 18:58:34.951098   48088 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:58:34.956637   48088 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:36.967075   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
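	The pod_ready.go lines above record a bounded wait (up to 4m0s) for each system-critical pod to report the Ready condition. The following is a rough client-go sketch of such a readiness poll, assuming a kubeconfig at the default location; the function name, the 2-second poll interval, and the hard-coded pod name are illustrative, not minikube's actual API.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the context
	// deadline expires (illustrative helper, not minikube's pod_ready.go).
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-5dd5756b68-tr4nn"))
	}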
	I0229 18:58:35.215815   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:35.715203   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.215521   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.715525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.215610   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:37.715474   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.215208   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:38.714993   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.215128   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:39.715944   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:36.584041   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.584897   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:38.722817   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.20033311s)
	I0229 18:58:38.722904   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 18:58:38.722923   47515 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.830873001s)
	I0229 18:58:38.722981   47515 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 18:58:38.723016   47515 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:38.722938   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.723083   47515 ssh_runner.go:195] Run: which crictl
	I0229 18:58:38.723104   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 18:58:38.722872   47515 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.145756086s)
	I0229 18:58:38.723163   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 18:58:38.728297   47515 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:58:42.131683   47515 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.403360461s)
	I0229 18:58:42.131729   47515 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 18:58:42.131819   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.408694108s)
	I0229 18:58:42.131839   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 18:58:42.131823   47515 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:42.131862   47515 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:42.131903   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 18:58:39.465588   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:41.473698   48088 pod_ready.go:102] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:42.965252   48088 pod_ready.go:92] pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.965281   48088 pod_ready.go:81] duration metric: took 8.008622438s waiting for pod "coredns-5dd5756b68-tr4nn" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.965293   48088 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977865   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:42.977888   48088 pod_ready.go:81] duration metric: took 12.586144ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:42.977900   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486518   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:43.486545   48088 pod_ready.go:81] duration metric: took 508.631346ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:43.486554   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:40.215679   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:40.715898   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.215271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.715702   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.214943   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:42.715085   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.215196   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:43.715164   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.215580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:44.715155   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:41.082209   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.089104   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:45.101973   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:43.991872   47515 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.859995098s)
	I0229 18:58:43.991921   47515 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 18:58:43.992104   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.860178579s)
	I0229 18:58:43.992159   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 18:58:43.992190   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:43.992238   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 18:58:45.454368   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.462102352s)
	I0229 18:58:45.454407   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 18:58:45.454436   47515 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.454567   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 18:58:45.493014   48088 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.493937   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.493969   48088 pod_ready.go:81] duration metric: took 3.007406763s waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.493982   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499157   48088 pod_ready.go:92] pod "kube-proxy-fwqsv" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:46.499177   48088 pod_ready.go:81] duration metric: took 5.187224ms waiting for pod "kube-proxy-fwqsv" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:46.499188   48088 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006573   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 18:58:48.006600   48088 pod_ready.go:81] duration metric: took 1.507402889s waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:48.006612   48088 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	I0229 18:58:45.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:45.715879   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.215457   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:46.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.715056   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.215140   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:48.715448   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.215722   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:49.715058   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:47.586794   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.084118   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:48.118942   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.664337971s)
	I0229 18:58:48.118983   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 18:58:48.119010   47515 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:48.119086   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 18:58:52.117429   47515 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.998319742s)
	I0229 18:58:52.117462   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 18:58:52.117488   47515 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:52.117538   47515 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 18:58:50.015404   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:52.515203   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:50.214969   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:50.715535   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:51.715704   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.215238   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.715897   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.215106   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:53.715753   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.215737   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:54.715449   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:52.084871   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:54.582435   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.079184   47515 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18259-6428/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 18:58:53.079224   47515 cache_images.go:123] Successfully loaded all cached images
	I0229 18:58:53.079231   47515 cache_images.go:92] LoadImages completed in 17.104746432s
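	The cache_images/crio lines above follow one pattern per image: inspect it in the runtime, and if it is missing, transfer the cached tarball into /var/lib/minikube/images and load it with podman. A minimal sketch of that check-then-load step is shown below; the image name and tarball path are taken from the log, while the ensureImage helper is assumed for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureImage loads a cached image tarball into the podman/CRI-O store only
	// when the image is not already present (illustrative helper).
	func ensureImage(image, tarball string) error {
		// already in the runtime? podman exits 0 if the image exists
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil
		}
		// otherwise load the cached archive, assumed to be on the node already
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("loading %s: %v\n%s", image, err, out)
		}
		return nil
	}

	func main() {
		err := ensureImage("registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2")
		fmt.Println(err)
	}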
	I0229 18:58:53.079303   47515 ssh_runner.go:195] Run: crio config
	I0229 18:58:53.126378   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:58:53.126400   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:58:53.126417   47515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:53.126434   47515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247197 NodeName:no-preload-247197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:53.126583   47515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247197"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:53.126643   47515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:58:53.126692   47515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:58:53.141044   47515 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:53.141117   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:53.153167   47515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 18:58:53.173724   47515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:58:53.192645   47515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0229 18:58:53.212004   47515 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:53.216443   47515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:58:53.233319   47515 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197 for IP: 192.168.50.72
	I0229 18:58:53.233353   47515 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:53.233510   47515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 18:58:53.233568   47515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 18:58:53.233680   47515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.key
	I0229 18:58:53.233763   47515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key.7c8fc674
	I0229 18:58:53.233803   47515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key
	I0229 18:58:53.233915   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 18:58:53.233942   47515 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:53.233948   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:58:53.233971   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 18:58:53.233991   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:58:53.234011   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 18:58:53.234050   47515 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 18:58:53.234710   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:53.264093   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:58:53.290793   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:53.319206   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:58:53.346074   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:53.373754   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:58:53.402222   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:53.430685   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:58:53.458589   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 18:58:53.485553   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 18:58:53.513594   47515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:53.542588   47515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:53.562935   47515 ssh_runner.go:195] Run: openssl version
	I0229 18:58:53.571313   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 18:58:53.586708   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592585   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.592682   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 18:58:53.600135   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:53.614410   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 18:58:53.627733   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632869   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.632926   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 18:58:53.639973   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:58:53.654090   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:53.667714   47515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.672987   47515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.673046   47515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:53.679806   47515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:53.692846   47515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:53.697764   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:53.704678   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:53.711070   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:53.717607   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:53.724048   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:53.731138   47515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:58:53.737875   47515 kubeadm.go:404] StartCluster: {Name:no-preload-247197 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247197 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:53.737981   47515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 18:58:53.738028   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:58:53.777952   47515 cri.go:89] found id: ""
	I0229 18:58:53.778016   47515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:53.790323   47515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:53.790342   47515 kubeadm.go:636] restartCluster start
	I0229 18:58:53.790397   47515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:53.801812   47515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:53.803203   47515 kubeconfig.go:92] found "no-preload-247197" server: "https://192.168.50.72:8443"
	I0229 18:58:53.806252   47515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:53.817542   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:53.817601   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:53.831702   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.318196   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.318261   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.332586   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:54.818521   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:54.818617   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:54.835279   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.317681   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.317760   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.334156   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.818654   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:55.818761   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:55.834435   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.317800   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.317923   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.333149   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:56.817667   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:56.817776   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:56.832497   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.318058   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.318173   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.332672   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:57.818372   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:57.818477   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:57.834669   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:55.015453   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:57.513580   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:55.215634   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:55.715221   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.215582   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.715580   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.215652   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:57.715281   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.215656   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:58.715759   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.216000   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.714984   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:56.583205   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:59.083761   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:58.318525   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.318595   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.334704   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:58.818249   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:58.818360   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:58.834221   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.318385   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.318489   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.334283   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.818167   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.818231   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:58:59.834310   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.317793   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.317904   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.334063   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:00.817623   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:00.817702   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:00.832855   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.318481   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.318569   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.333716   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:01.818312   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:01.818413   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:01.834094   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.317571   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.317680   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.332422   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:02.817947   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:02.818044   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:02.834339   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:59.514446   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:02.015881   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:00.215747   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:00.715123   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.214978   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.715726   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.215092   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:02.715148   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.215149   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:03.715717   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.215830   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:04.715275   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:01.084277   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.583278   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.318317   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.318410   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.334824   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.818569   47515 api_server.go:166] Checking apiserver status ...
	I0229 18:59:03.818652   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:59:03.834206   47515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:59:03.834235   47515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:59:03.834244   47515 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:59:03.834255   47515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 18:59:03.834306   47515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:59:03.877464   47515 cri.go:89] found id: ""
	I0229 18:59:03.877543   47515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:59:03.901093   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:59:03.912185   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:59:03.912237   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923685   47515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:03.923706   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:04.037753   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.127681   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089896164s)
	I0229 18:59:05.127710   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.363326   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.447053   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.525183   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:05.525276   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.026071   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.525747   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.026103   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.043681   47515 api_server.go:72] duration metric: took 1.518498943s to wait for apiserver process to appear ...
	I0229 18:59:07.043706   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:07.043728   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:04.518296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.014672   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:05.215563   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:05.715180   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.215014   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:06.715750   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.215911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:07.715662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.215895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.715565   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.214999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:09.215096   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:09.270645   47919 cri.go:89] found id: ""
	I0229 18:59:09.270672   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.270683   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:09.270690   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:09.270748   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:09.335492   47919 cri.go:89] found id: ""
	I0229 18:59:09.335519   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.335530   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:09.335546   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:09.335627   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:09.405117   47919 cri.go:89] found id: ""
	I0229 18:59:09.405150   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.405160   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:09.405167   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:09.405233   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:09.451096   47919 cri.go:89] found id: ""
	I0229 18:59:09.451128   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.451140   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:09.451147   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:09.451226   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:09.498951   47919 cri.go:89] found id: ""
	I0229 18:59:09.498981   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.499007   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:09.499014   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:09.499091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:09.544447   47919 cri.go:89] found id: ""
	I0229 18:59:09.544474   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.544484   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:09.544491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:09.544548   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:09.597735   47919 cri.go:89] found id: ""
	I0229 18:59:09.597764   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.597775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:09.597782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:09.597836   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:09.648458   47919 cri.go:89] found id: ""
	I0229 18:59:09.648480   47919 logs.go:276] 0 containers: []
	W0229 18:59:09.648489   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:09.648499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:09.648515   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:09.700744   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:09.700792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:09.717303   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:09.717332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:09.845966   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:09.845984   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:09.845995   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:09.913069   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:09.913106   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:05.583650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.584155   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.584605   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.527960   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.528037   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.528059   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.571679   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:09.571713   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:09.571738   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:09.647733   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:09.647780   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.044646   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.049310   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.049347   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:10.543904   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:10.551014   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:10.551055   47515 api_server.go:103] status: https://192.168.50.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:11.044658   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 18:59:11.051170   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 18:59:11.059048   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:11.059076   47515 api_server.go:131] duration metric: took 4.015363545s to wait for apiserver health ...
	I0229 18:59:11.059085   47515 cni.go:84] Creating CNI manager for ""
	I0229 18:59:11.059092   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 18:59:11.060915   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:59:11.062158   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:59:11.076961   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:59:11.109344   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:11.129625   47515 system_pods.go:59] 8 kube-system pods found
	I0229 18:59:11.129659   47515 system_pods.go:61] "coredns-76f75df574-dfrds" [ab7ce7e3-0532-48a1-9177-00e554d7e5af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:59:11.129668   47515 system_pods.go:61] "etcd-no-preload-247197" [e37e6d4c-5039-484e-98af-553ade3ba60f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:11.129679   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [933648a9-115f-4c2a-b699-48ef7409331c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:11.129692   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [b87a4a06-8a47-4cdf-a5e7-85f967e6332a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:11.129699   47515 system_pods.go:61] "kube-proxy-hjm9j" [a2e6ec66-78d9-4637-bb47-3f954969813b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:59:11.129707   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [cc52dc2c-cbe0-4cf0-8a2d-99a6f1880f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:11.129717   47515 system_pods.go:61] "metrics-server-57f55c9bc5-ggf8f" [dd2986a2-20a9-499c-805a-3c28819ff2f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:59:11.129726   47515 system_pods.go:61] "storage-provisioner" [22f64d5e-b947-43ed-9842-cb6e252fd4a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:59:11.129733   47515 system_pods.go:74] duration metric: took 20.366108ms to wait for pod list to return data ...
	I0229 18:59:11.129742   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:11.133259   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:59:11.133282   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 18:59:11.133294   47515 node_conditions.go:105] duration metric: took 3.545943ms to run NodePressure ...
	I0229 18:59:11.133313   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:11.618536   47515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625628   47515 kubeadm.go:787] kubelet initialised
	I0229 18:59:11.625649   47515 kubeadm.go:788] duration metric: took 7.089584ms waiting for restarted kubelet to initialise ...
	I0229 18:59:11.625661   47515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:59:11.641122   47515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:09.515059   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:11.515286   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.013214   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:12.465591   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:12.479774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:12.479825   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:12.517591   47919 cri.go:89] found id: ""
	I0229 18:59:12.517620   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.517630   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:12.517637   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:12.517693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:12.560735   47919 cri.go:89] found id: ""
	I0229 18:59:12.560758   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.560769   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:12.560776   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:12.560843   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:12.600002   47919 cri.go:89] found id: ""
	I0229 18:59:12.600025   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.600033   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:12.600043   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:12.600088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:12.639223   47919 cri.go:89] found id: ""
	I0229 18:59:12.639252   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.639264   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:12.639272   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:12.639339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:12.682491   47919 cri.go:89] found id: ""
	I0229 18:59:12.682514   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.682524   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:12.682531   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:12.682590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:12.720669   47919 cri.go:89] found id: ""
	I0229 18:59:12.720693   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.720700   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:12.720706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:12.720773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:12.764880   47919 cri.go:89] found id: ""
	I0229 18:59:12.764908   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.764919   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:12.764926   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:12.765011   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:12.808987   47919 cri.go:89] found id: ""
	I0229 18:59:12.809019   47919 logs.go:276] 0 containers: []
	W0229 18:59:12.809052   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:12.809064   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:12.809079   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:12.866228   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:12.866263   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:12.886698   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:12.886729   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:12.963092   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:12.963116   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:12.963129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:13.034485   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:13.034524   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:11.586793   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:14.081742   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:13.648688   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.648876   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:17.649478   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:16.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.015919   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.588224   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.603293   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:15.603368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:15.648746   47919 cri.go:89] found id: ""
	I0229 18:59:15.648771   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.648781   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:15.648788   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:15.648850   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:15.686420   47919 cri.go:89] found id: ""
	I0229 18:59:15.686447   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.686463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:15.686470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:15.686533   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:15.729410   47919 cri.go:89] found id: ""
	I0229 18:59:15.729439   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.729451   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:15.729458   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:15.729526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:15.768078   47919 cri.go:89] found id: ""
	I0229 18:59:15.768108   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.768119   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:15.768127   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:15.768188   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:15.806725   47919 cri.go:89] found id: ""
	I0229 18:59:15.806753   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.806765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:15.806772   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:15.806845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:15.848566   47919 cri.go:89] found id: ""
	I0229 18:59:15.848593   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.848600   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:15.848606   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:15.848657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:15.888907   47919 cri.go:89] found id: ""
	I0229 18:59:15.888930   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.888942   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:15.888948   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:15.889009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:15.926653   47919 cri.go:89] found id: ""
	I0229 18:59:15.926686   47919 logs.go:276] 0 containers: []
	W0229 18:59:15.926708   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:15.926729   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:15.926746   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:15.976773   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:15.976812   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:15.995440   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:15.995477   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:16.103753   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:16.103774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:16.103786   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:16.188282   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:16.188319   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:18.733451   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:18.748528   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:18.748607   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:18.785998   47919 cri.go:89] found id: ""
	I0229 18:59:18.786055   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.786069   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:18.786078   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:18.786144   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:18.824234   47919 cri.go:89] found id: ""
	I0229 18:59:18.824260   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.824270   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:18.824277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:18.824339   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:18.868586   47919 cri.go:89] found id: ""
	I0229 18:59:18.868615   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.868626   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:18.868633   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:18.868696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:18.912622   47919 cri.go:89] found id: ""
	I0229 18:59:18.912647   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.912655   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:18.912661   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:18.912708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:18.952001   47919 cri.go:89] found id: ""
	I0229 18:59:18.952029   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.952040   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:18.952047   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:18.952108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:18.993085   47919 cri.go:89] found id: ""
	I0229 18:59:18.993130   47919 logs.go:276] 0 containers: []
	W0229 18:59:18.993140   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:18.993148   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:18.993209   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:19.041498   47919 cri.go:89] found id: ""
	I0229 18:59:19.041524   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.041536   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:19.041543   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:19.041601   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:19.099107   47919 cri.go:89] found id: ""
	I0229 18:59:19.099132   47919 logs.go:276] 0 containers: []
	W0229 18:59:19.099143   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:19.099153   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:19.099169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:19.158578   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:19.158615   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:19.173561   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:19.173590   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:19.248498   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:19.248524   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:19.248540   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:19.326904   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:19.326939   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:16.085349   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:18.582796   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:20.148468   47515 pod_ready.go:102] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.648188   47515 pod_ready.go:92] pod "coredns-76f75df574-dfrds" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:21.648214   47515 pod_ready.go:81] duration metric: took 10.0070638s waiting for pod "coredns-76f75df574-dfrds" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:21.648225   47515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:20.514234   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:22.514669   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:21.877087   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:21.892919   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:21.892976   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:21.931119   47919 cri.go:89] found id: ""
	I0229 18:59:21.931147   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.931159   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:21.931167   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:21.931227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:21.971884   47919 cri.go:89] found id: ""
	I0229 18:59:21.971908   47919 logs.go:276] 0 containers: []
	W0229 18:59:21.971916   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:21.971921   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:21.971975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:22.019170   47919 cri.go:89] found id: ""
	I0229 18:59:22.019206   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.019216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:22.019232   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:22.019311   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:22.078057   47919 cri.go:89] found id: ""
	I0229 18:59:22.078083   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.078093   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:22.078100   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:22.078162   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:22.128112   47919 cri.go:89] found id: ""
	I0229 18:59:22.128141   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.128151   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:22.128157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:22.128214   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:22.171354   47919 cri.go:89] found id: ""
	I0229 18:59:22.171382   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.171393   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:22.171400   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:22.171466   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:22.225620   47919 cri.go:89] found id: ""
	I0229 18:59:22.225642   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.225651   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:22.225658   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:22.225718   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:22.271291   47919 cri.go:89] found id: ""
	I0229 18:59:22.271320   47919 logs.go:276] 0 containers: []
	W0229 18:59:22.271332   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:22.271343   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:22.271358   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:22.336735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:22.336765   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:22.354397   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:22.354425   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:22.432691   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:22.432713   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:22.432727   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:22.520239   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:22.520268   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:20.587039   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.084979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.086225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:23.657675   47515 pod_ready.go:102] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.656013   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.656050   47515 pod_ready.go:81] duration metric: took 4.007810687s waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.656064   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661235   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.661263   47515 pod_ready.go:81] duration metric: took 5.191999ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.661273   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666649   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.666672   47515 pod_ready.go:81] duration metric: took 5.388774ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.666680   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672042   47515 pod_ready.go:92] pod "kube-proxy-hjm9j" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.672067   47515 pod_ready.go:81] duration metric: took 5.380771ms waiting for pod "kube-proxy-hjm9j" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.672076   47515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.676980   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 18:59:25.677001   47515 pod_ready.go:81] duration metric: took 4.919332ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:25.677013   47515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	I0229 18:59:27.684865   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.017772   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:27.513975   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:25.073478   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:25.105197   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:25.105262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:25.165700   47919 cri.go:89] found id: ""
	I0229 18:59:25.165728   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.165737   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:25.165744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:25.165810   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:25.210864   47919 cri.go:89] found id: ""
	I0229 18:59:25.210892   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.210904   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:25.210911   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:25.210974   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:25.257785   47919 cri.go:89] found id: ""
	I0229 18:59:25.257810   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.257820   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:25.257827   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:25.257888   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:25.299816   47919 cri.go:89] found id: ""
	I0229 18:59:25.299844   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.299855   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:25.299863   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:25.299933   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:25.339711   47919 cri.go:89] found id: ""
	I0229 18:59:25.339737   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.339746   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:25.339751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:25.339805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:25.381107   47919 cri.go:89] found id: ""
	I0229 18:59:25.381135   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.381145   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:25.381153   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:25.381211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:25.429029   47919 cri.go:89] found id: ""
	I0229 18:59:25.429054   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.429064   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:25.429071   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:25.429130   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:25.470598   47919 cri.go:89] found id: ""
	I0229 18:59:25.470629   47919 logs.go:276] 0 containers: []
	W0229 18:59:25.470637   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:25.470644   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:25.470655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:25.516439   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:25.516476   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:25.569170   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:25.569204   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:25.584405   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:25.584430   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:25.663650   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:25.663671   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:25.663686   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
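
Aside: the cri.go cycle above repeatedly runs `sudo crictl ps -a --quiet --name=<component>`; with --quiet, crictl prints one container ID per line, so empty output is what the log reports as `found id: ""` / "0 containers". A simplified sketch of that listing step, under the assumption the command runs locally rather than over minikube's SSH runner:

// Illustrative sketch (not minikube's cri.go) of listing CRI container IDs the
// way the log above does; empty crictl output maps to an empty slice.
package crilist

import (
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil // len(ids) == 0 corresponds to the "0 containers: []" lines above
}
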
	I0229 18:59:28.248036   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:28.263367   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:28.263440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:28.302232   47919 cri.go:89] found id: ""
	I0229 18:59:28.302259   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.302273   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:28.302281   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:28.302340   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:28.345147   47919 cri.go:89] found id: ""
	I0229 18:59:28.345173   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.345185   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:28.345192   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:28.345250   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:28.383671   47919 cri.go:89] found id: ""
	I0229 18:59:28.383690   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.383702   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:28.383709   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:28.383762   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:28.423737   47919 cri.go:89] found id: ""
	I0229 18:59:28.423762   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.423769   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:28.423774   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:28.423826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:28.465679   47919 cri.go:89] found id: ""
	I0229 18:59:28.465705   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.465715   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:28.465723   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:28.465775   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:28.509703   47919 cri.go:89] found id: ""
	I0229 18:59:28.509731   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.509742   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:28.509754   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:28.509826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:28.549981   47919 cri.go:89] found id: ""
	I0229 18:59:28.550010   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.550021   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:28.550027   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:28.550093   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:28.589802   47919 cri.go:89] found id: ""
	I0229 18:59:28.589827   47919 logs.go:276] 0 containers: []
	W0229 18:59:28.589834   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:28.589841   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:28.589853   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:28.670623   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:28.670644   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:28.670655   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:28.765451   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:28.765484   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:28.821538   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:28.821571   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:28.889401   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:28.889438   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
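
Aside: the "Gathering logs for ..." steps shell out through /bin/bash -c so that pipelines such as the dmesg | tail filter work as a single command string. A simplified stand-in for that runner is sketched below; the command strings are copied from the log, the gather helper itself is an assumption for illustration.

// Illustrative sketch of running the log-gathering commands above via bash,
// so pipes and backticks in the command string are honored.
package gather

import "os/exec"

func gather(cmd string) (string, error) {
	// /bin/bash -c lets one string carry pipes and redirects.
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

// Example inputs taken from the log lines above:
//   gather(`sudo journalctl -u kubelet -n 400`)
//   gather(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
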
	I0229 18:59:27.583470   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.584344   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:30.184242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:32.184867   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:29.514804   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.516473   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.013518   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:31.406911   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:31.422464   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:31.422541   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:31.460701   47919 cri.go:89] found id: ""
	I0229 18:59:31.460744   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.460755   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:31.460762   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:31.460822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:31.506966   47919 cri.go:89] found id: ""
	I0229 18:59:31.506996   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.507007   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:31.507013   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:31.507088   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:31.542582   47919 cri.go:89] found id: ""
	I0229 18:59:31.542611   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.542623   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:31.542631   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:31.542693   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:31.585470   47919 cri.go:89] found id: ""
	I0229 18:59:31.585496   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.585508   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:31.585516   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:31.585574   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:31.627751   47919 cri.go:89] found id: ""
	I0229 18:59:31.627785   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.627797   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:31.627805   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:31.627864   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:31.665988   47919 cri.go:89] found id: ""
	I0229 18:59:31.666009   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.666017   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:31.666023   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:31.666081   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:31.712553   47919 cri.go:89] found id: ""
	I0229 18:59:31.712583   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.712597   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:31.712603   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:31.712659   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.749904   47919 cri.go:89] found id: ""
	I0229 18:59:31.749944   47919 logs.go:276] 0 containers: []
	W0229 18:59:31.749954   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:31.749965   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:31.749980   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:31.843949   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:31.843992   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:31.898158   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:31.898186   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:31.949798   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:31.949831   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:31.965666   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:31.965697   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:32.040368   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
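
Aside: every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port yet. A quick TCP probe tells the same story; this is an illustrative check, not something the test harness itself runs.

// Illustrative sketch: a plain TCP dial to the apiserver address distinguishes
// "port not listening" (the refusal above) from a reachable apiserver.
package probe

import (
	"net"
	"time"
)

func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // matches "connection to the server localhost:8443 was refused"
	}
	conn.Close()
	return true
}

// Usage: apiserverListening("127.0.0.1:8443")
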
	I0229 18:59:34.541417   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:34.558286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:34.558345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:34.602083   47919 cri.go:89] found id: ""
	I0229 18:59:34.602113   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.602123   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:34.602130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:34.602200   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:34.647108   47919 cri.go:89] found id: ""
	I0229 18:59:34.647136   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.647146   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:34.647151   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:34.647220   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:34.692920   47919 cri.go:89] found id: ""
	I0229 18:59:34.692942   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.692950   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:34.692956   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:34.693000   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:34.739367   47919 cri.go:89] found id: ""
	I0229 18:59:34.739397   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.739408   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:34.739416   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:34.739478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:34.794083   47919 cri.go:89] found id: ""
	I0229 18:59:34.794106   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.794114   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:34.794120   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:34.794179   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:34.865371   47919 cri.go:89] found id: ""
	I0229 18:59:34.865400   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.865412   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:34.865419   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:34.865476   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:34.906957   47919 cri.go:89] found id: ""
	I0229 18:59:34.906986   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.906994   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:34.906999   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:34.907063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:31.584743   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.085375   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.684397   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:37.183641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:36.015759   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.514451   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:34.948548   47919 cri.go:89] found id: ""
	I0229 18:59:34.948570   47919 logs.go:276] 0 containers: []
	W0229 18:59:34.948577   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:34.948586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:34.948598   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:35.036558   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:35.036594   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:35.080137   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:35.080169   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:35.130408   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:35.130436   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:35.148306   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:35.148332   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:35.222648   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:37.723158   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:37.741809   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:37.741885   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:37.787147   47919 cri.go:89] found id: ""
	I0229 18:59:37.787177   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.787184   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:37.787192   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:37.787249   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:37.835589   47919 cri.go:89] found id: ""
	I0229 18:59:37.835613   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.835623   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:37.835630   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:37.835687   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:37.895088   47919 cri.go:89] found id: ""
	I0229 18:59:37.895118   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.895130   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:37.895137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:37.895194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:37.940837   47919 cri.go:89] found id: ""
	I0229 18:59:37.940867   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.940878   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:37.940886   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:37.940946   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:37.989155   47919 cri.go:89] found id: ""
	I0229 18:59:37.989183   47919 logs.go:276] 0 containers: []
	W0229 18:59:37.989194   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:37.989203   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:37.989267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:38.026517   47919 cri.go:89] found id: ""
	I0229 18:59:38.026543   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.026553   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:38.026560   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:38.026623   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:38.063299   47919 cri.go:89] found id: ""
	I0229 18:59:38.063328   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.063340   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:38.063347   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:38.063393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:38.106278   47919 cri.go:89] found id: ""
	I0229 18:59:38.106298   47919 logs.go:276] 0 containers: []
	W0229 18:59:38.106305   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:38.106315   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:38.106330   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:38.182985   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:38.183008   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:38.183038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:38.260280   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:38.260312   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:38.303648   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:38.303678   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:38.352889   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:38.352931   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:36.583258   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:38.583878   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:39.185221   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:41.684957   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.515303   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.017529   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:40.870416   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:40.885618   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:40.885692   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:40.924088   47919 cri.go:89] found id: ""
	I0229 18:59:40.924115   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.924126   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:40.924133   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:40.924192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:40.959485   47919 cri.go:89] found id: ""
	I0229 18:59:40.959513   47919 logs.go:276] 0 containers: []
	W0229 18:59:40.959524   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:40.959532   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:40.959593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:41.009453   47919 cri.go:89] found id: ""
	I0229 18:59:41.009478   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.009489   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:41.009496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:41.009552   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:41.052894   47919 cri.go:89] found id: ""
	I0229 18:59:41.052922   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.052933   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:41.052940   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:41.052997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:41.098299   47919 cri.go:89] found id: ""
	I0229 18:59:41.098328   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.098338   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:41.098345   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:41.098460   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:41.138287   47919 cri.go:89] found id: ""
	I0229 18:59:41.138313   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.138324   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:41.138333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:41.138395   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:41.176482   47919 cri.go:89] found id: ""
	I0229 18:59:41.176512   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.176522   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:41.176529   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:41.176598   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:41.215284   47919 cri.go:89] found id: ""
	I0229 18:59:41.215307   47919 logs.go:276] 0 containers: []
	W0229 18:59:41.215317   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:41.215327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:41.215342   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:41.230954   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:41.230982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:41.313672   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:41.313696   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:41.313713   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:41.393574   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:41.393610   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:41.443384   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:41.443422   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:43.994323   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:44.008821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:44.008892   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:44.050088   47919 cri.go:89] found id: ""
	I0229 18:59:44.050116   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.050124   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:44.050130   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:44.050207   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:44.089721   47919 cri.go:89] found id: ""
	I0229 18:59:44.089749   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.089756   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:44.089762   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:44.089818   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:44.132366   47919 cri.go:89] found id: ""
	I0229 18:59:44.132398   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.132407   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:44.132412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:44.132468   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:44.173568   47919 cri.go:89] found id: ""
	I0229 18:59:44.173591   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.173598   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:44.173604   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:44.173661   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:44.214660   47919 cri.go:89] found id: ""
	I0229 18:59:44.214683   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.214691   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:44.214696   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:44.214747   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:44.254355   47919 cri.go:89] found id: ""
	I0229 18:59:44.254386   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.254397   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:44.254405   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:44.254464   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:44.293548   47919 cri.go:89] found id: ""
	I0229 18:59:44.293573   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.293584   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:44.293591   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:44.293652   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:44.333335   47919 cri.go:89] found id: ""
	I0229 18:59:44.333361   47919 logs.go:276] 0 containers: []
	W0229 18:59:44.333372   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:44.333383   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:44.333398   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:44.348941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:44.348973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:44.419949   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:44.419968   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:44.419982   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:44.503445   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:44.503479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:44.558694   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:44.558728   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:40.584127   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.084271   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:43.685573   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:46.184467   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:45.513896   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.514467   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.129362   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:47.145410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:47.145483   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:47.194037   47919 cri.go:89] found id: ""
	I0229 18:59:47.194073   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.194092   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:47.194100   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:47.194160   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:47.232500   47919 cri.go:89] found id: ""
	I0229 18:59:47.232528   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.232559   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:47.232568   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:47.232634   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:47.271452   47919 cri.go:89] found id: ""
	I0229 18:59:47.271485   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.271494   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:47.271501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:47.271561   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:47.313482   47919 cri.go:89] found id: ""
	I0229 18:59:47.313509   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.313520   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:47.313527   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:47.313590   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:47.354958   47919 cri.go:89] found id: ""
	I0229 18:59:47.354988   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.354996   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:47.355001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:47.355092   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:47.393312   47919 cri.go:89] found id: ""
	I0229 18:59:47.393338   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.393349   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:47.393356   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:47.393415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:47.431370   47919 cri.go:89] found id: ""
	I0229 18:59:47.431396   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.431406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:47.431413   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:47.431471   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:47.471659   47919 cri.go:89] found id: ""
	I0229 18:59:47.471683   47919 logs.go:276] 0 containers: []
	W0229 18:59:47.471692   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:47.471702   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:47.471715   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:47.530365   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:47.530405   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:47.558874   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:47.558903   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:47.644009   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:47.644033   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:47.644047   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:47.730063   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:47.730095   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:45.583524   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:47.585620   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.083189   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:48.684211   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.686885   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:49.514667   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:52.014092   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:50.272945   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:50.288718   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:50.288796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:50.331460   47919 cri.go:89] found id: ""
	I0229 18:59:50.331482   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.331489   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:50.331495   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:50.331543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:50.374960   47919 cri.go:89] found id: ""
	I0229 18:59:50.374989   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.375000   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:50.375006   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:50.375076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:50.415073   47919 cri.go:89] found id: ""
	I0229 18:59:50.415095   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.415102   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:50.415107   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:50.415157   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:50.452511   47919 cri.go:89] found id: ""
	I0229 18:59:50.452554   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.452563   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:50.452568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:50.452612   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:50.498103   47919 cri.go:89] found id: ""
	I0229 18:59:50.498125   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.498132   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:50.498137   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:50.498193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:50.545366   47919 cri.go:89] found id: ""
	I0229 18:59:50.545397   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.545409   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:50.545417   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:50.545487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:50.608215   47919 cri.go:89] found id: ""
	I0229 18:59:50.608239   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.608250   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:50.608257   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:50.608314   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:50.660835   47919 cri.go:89] found id: ""
	I0229 18:59:50.660861   47919 logs.go:276] 0 containers: []
	W0229 18:59:50.660881   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:50.660892   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:50.660907   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:50.749671   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:50.749712   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:50.797567   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:50.797595   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:50.848022   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:50.848059   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:50.862797   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:50.862820   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:50.934682   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:53.435804   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:53.451364   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:53.451440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:53.500680   47919 cri.go:89] found id: ""
	I0229 18:59:53.500706   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.500717   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:53.500744   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:53.500797   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:53.565306   47919 cri.go:89] found id: ""
	I0229 18:59:53.565334   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.565344   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:53.565351   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:53.565410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:53.631438   47919 cri.go:89] found id: ""
	I0229 18:59:53.631461   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.631479   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:53.631486   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:53.631554   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:53.679482   47919 cri.go:89] found id: ""
	I0229 18:59:53.679506   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.679516   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:53.679524   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:53.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:53.722098   47919 cri.go:89] found id: ""
	I0229 18:59:53.722125   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.722135   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:53.722142   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:53.722211   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:53.761804   47919 cri.go:89] found id: ""
	I0229 18:59:53.761838   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.761849   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:53.761858   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:53.761942   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:53.806109   47919 cri.go:89] found id: ""
	I0229 18:59:53.806137   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.806149   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:53.806157   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:53.806219   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:53.856794   47919 cri.go:89] found id: ""
	I0229 18:59:53.856823   47919 logs.go:276] 0 containers: []
	W0229 18:59:53.856831   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:53.856839   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:53.856849   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:53.908216   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:53.908252   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:53.923999   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:53.924038   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:54.000750   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:54.000772   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:54.000783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:54.086840   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:54.086870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:52.083751   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.586556   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:53.184426   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:55.683893   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:57.685129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:54.513193   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.515925   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.013745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:56.630728   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:56.647368   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:56.647440   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:56.693706   47919 cri.go:89] found id: ""
	I0229 18:59:56.693726   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.693733   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:56.693738   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:56.693780   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:56.733377   47919 cri.go:89] found id: ""
	I0229 18:59:56.733404   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.733415   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:56.733423   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:56.733491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:56.772186   47919 cri.go:89] found id: ""
	I0229 18:59:56.772209   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.772216   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:56.772222   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:56.772267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:56.811919   47919 cri.go:89] found id: ""
	I0229 18:59:56.811964   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.811977   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:56.811984   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:56.812035   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.849345   47919 cri.go:89] found id: ""
	I0229 18:59:56.849372   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.849383   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:56.849390   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:56.849447   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:59:56.900091   47919 cri.go:89] found id: ""
	I0229 18:59:56.900119   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.900129   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:59:56.900136   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 18:59:56.900193   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:59:56.937662   47919 cri.go:89] found id: ""
	I0229 18:59:56.937692   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.937703   47919 logs.go:278] No container was found matching "kindnet"
	I0229 18:59:56.937710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:59:56.937772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:59:56.978195   47919 cri.go:89] found id: ""
	I0229 18:59:56.978224   47919 logs.go:276] 0 containers: []
	W0229 18:59:56.978234   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:59:56.978244   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 18:59:56.978259   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 18:59:57.059190   47919 logs.go:123] Gathering logs for container status ...
	I0229 18:59:57.059223   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:59:57.101416   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 18:59:57.101442   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:59:57.156102   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 18:59:57.156140   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:59:57.171401   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:59:57.171435   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:59:57.243717   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:59:59.744588   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:59.760099   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:59:59.760175   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:59:59.798722   47919 cri.go:89] found id: ""
	I0229 18:59:59.798751   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.798762   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:59:59.798770   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 18:59:59.798830   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:59:59.842423   47919 cri.go:89] found id: ""
	I0229 18:59:59.842452   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.842463   47919 logs.go:278] No container was found matching "etcd"
	I0229 18:59:59.842470   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 18:59:59.842532   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:59:59.883742   47919 cri.go:89] found id: ""
	I0229 18:59:59.883768   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.883775   47919 logs.go:278] No container was found matching "coredns"
	I0229 18:59:59.883781   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:59:59.883826   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:59:59.924062   47919 cri.go:89] found id: ""
	I0229 18:59:59.924091   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.924102   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:59:59.924109   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:59:59.924166   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:59:56.587621   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.087882   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.685911   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:02.185406   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:01.014202   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.014972   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:59.962465   47919 cri.go:89] found id: ""
	I0229 18:59:59.962497   47919 logs.go:276] 0 containers: []
	W0229 18:59:59.962508   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:59:59.962515   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:59:59.962576   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:00.006069   47919 cri.go:89] found id: ""
	I0229 19:00:00.006103   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.006114   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:00.006123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:00.006185   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:00.047671   47919 cri.go:89] found id: ""
	I0229 19:00:00.047697   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.047709   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:00.047715   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:00.047773   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:00.091452   47919 cri.go:89] found id: ""
	I0229 19:00:00.091475   47919 logs.go:276] 0 containers: []
	W0229 19:00:00.091486   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:00.091497   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:00.091511   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:00.143282   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:00.143313   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:00.158342   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:00.158366   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:00.239745   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:00.239774   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:00.239792   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:00.339048   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:00.339083   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:02.898414   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:02.914154   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:02.914221   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:02.956122   47919 cri.go:89] found id: ""
	I0229 19:00:02.956151   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.956211   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:02.956225   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:02.956272   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:02.993609   47919 cri.go:89] found id: ""
	I0229 19:00:02.993636   47919 logs.go:276] 0 containers: []
	W0229 19:00:02.993646   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:02.993659   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:02.993720   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:03.038131   47919 cri.go:89] found id: ""
	I0229 19:00:03.038152   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.038160   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:03.038165   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:03.038217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:03.090845   47919 cri.go:89] found id: ""
	I0229 19:00:03.090866   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.090873   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:03.090878   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:03.090935   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:03.129520   47919 cri.go:89] found id: ""
	I0229 19:00:03.129549   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.129561   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:03.129568   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:03.129620   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:03.178528   47919 cri.go:89] found id: ""
	I0229 19:00:03.178557   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.178567   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:03.178575   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:03.178631   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:03.218337   47919 cri.go:89] found id: ""
	I0229 19:00:03.218357   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.218364   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:03.218369   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:03.218417   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:03.267682   47919 cri.go:89] found id: ""
	I0229 19:00:03.267713   47919 logs.go:276] 0 containers: []
	W0229 19:00:03.267726   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:03.267735   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:03.267753   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:03.286961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:03.286987   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:03.376514   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:03.376535   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:03.376546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:03.459824   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:03.459872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:03.505821   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:03.505848   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:01.582954   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:03.583198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:04.684892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.685508   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:05.015836   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:07.514376   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:06.062525   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:06.077637   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:06.077708   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:06.119344   47919 cri.go:89] found id: ""
	I0229 19:00:06.119368   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.119376   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:06.119381   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:06.119430   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:06.158209   47919 cri.go:89] found id: ""
	I0229 19:00:06.158232   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.158239   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:06.158245   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:06.158291   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:06.198521   47919 cri.go:89] found id: ""
	I0229 19:00:06.198545   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.198553   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:06.198559   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:06.198609   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:06.235872   47919 cri.go:89] found id: ""
	I0229 19:00:06.235919   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.235930   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:06.235937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:06.235998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:06.282814   47919 cri.go:89] found id: ""
	I0229 19:00:06.282841   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.282853   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:06.282860   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:06.282928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:06.330549   47919 cri.go:89] found id: ""
	I0229 19:00:06.330572   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.330580   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:06.330585   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:06.330632   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:06.399968   47919 cri.go:89] found id: ""
	I0229 19:00:06.399996   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.400006   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:06.400012   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:06.400062   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:06.444899   47919 cri.go:89] found id: ""
	I0229 19:00:06.444921   47919 logs.go:276] 0 containers: []
	W0229 19:00:06.444929   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:06.444937   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:06.444950   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:06.460552   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:06.460580   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:06.532932   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:06.532956   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:06.532969   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:06.615130   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:06.615170   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:06.664499   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:06.664532   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.219226   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:09.236769   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:09.236829   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:09.292309   47919 cri.go:89] found id: ""
	I0229 19:00:09.292331   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.292339   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:09.292345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:09.292392   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:09.355237   47919 cri.go:89] found id: ""
	I0229 19:00:09.355259   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.355267   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:09.355272   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:09.355319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:09.397950   47919 cri.go:89] found id: ""
	I0229 19:00:09.397977   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.397987   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:09.397995   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:09.398057   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:09.436751   47919 cri.go:89] found id: ""
	I0229 19:00:09.436779   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.436789   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:09.436797   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:09.436862   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:09.480288   47919 cri.go:89] found id: ""
	I0229 19:00:09.480311   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.480318   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:09.480324   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:09.480375   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:09.523576   47919 cri.go:89] found id: ""
	I0229 19:00:09.523599   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.523606   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:09.523611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:09.523658   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:09.562818   47919 cri.go:89] found id: ""
	I0229 19:00:09.562848   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.562859   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:09.562872   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:09.562919   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:09.603331   47919 cri.go:89] found id: ""
	I0229 19:00:09.603357   47919 logs.go:276] 0 containers: []
	W0229 19:00:09.603369   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:09.603379   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:09.603393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:09.652060   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:09.652089   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:09.668372   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:09.668394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:09.745897   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:09.745923   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:09.745937   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:09.826981   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:09.827014   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:05.590288   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:08.083411   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.084324   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:09.184577   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:11.185922   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:10.015288   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.513820   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:12.371447   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:12.385523   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:12.385613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:12.422038   47919 cri.go:89] found id: ""
	I0229 19:00:12.422067   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.422077   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:12.422084   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:12.422155   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:12.460443   47919 cri.go:89] found id: ""
	I0229 19:00:12.460470   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.460487   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:12.460495   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:12.460551   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:12.502791   47919 cri.go:89] found id: ""
	I0229 19:00:12.502820   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.502830   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:12.502838   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:12.502897   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:12.540738   47919 cri.go:89] found id: ""
	I0229 19:00:12.540769   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.540780   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:12.540786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:12.540845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:12.580041   47919 cri.go:89] found id: ""
	I0229 19:00:12.580072   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.580084   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:12.580091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:12.580151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:12.620721   47919 cri.go:89] found id: ""
	I0229 19:00:12.620750   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.620758   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:12.620763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:12.620820   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:12.659877   47919 cri.go:89] found id: ""
	I0229 19:00:12.659906   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.659917   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:12.659925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:12.659975   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:12.699133   47919 cri.go:89] found id: ""
	I0229 19:00:12.699160   47919 logs.go:276] 0 containers: []
	W0229 19:00:12.699170   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:12.699177   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:12.699188   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:12.742164   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:12.742189   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:12.792215   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:12.792248   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:12.808322   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:12.808344   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:12.879089   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:12.879114   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:12.879129   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:12.586572   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.083323   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:13.687899   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:16.184671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:14.521430   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:17.013799   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:19.014661   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:15.466778   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:15.480875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:15.480945   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:15.525331   47919 cri.go:89] found id: ""
	I0229 19:00:15.525353   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.525360   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:15.525366   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:15.525422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:15.567787   47919 cri.go:89] found id: ""
	I0229 19:00:15.567819   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.567831   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:15.567838   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:15.567923   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:15.609440   47919 cri.go:89] found id: ""
	I0229 19:00:15.609467   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.609477   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:15.609484   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:15.609559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:15.650113   47919 cri.go:89] found id: ""
	I0229 19:00:15.650142   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.650153   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:15.650161   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:15.650223   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:15.691499   47919 cri.go:89] found id: ""
	I0229 19:00:15.691527   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.691537   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:15.691544   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:15.691603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:15.731199   47919 cri.go:89] found id: ""
	I0229 19:00:15.731227   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.731239   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:15.731246   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:15.731324   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:15.772997   47919 cri.go:89] found id: ""
	I0229 19:00:15.773019   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.773027   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:15.773032   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:15.773091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:15.811223   47919 cri.go:89] found id: ""
	I0229 19:00:15.811244   47919 logs.go:276] 0 containers: []
	W0229 19:00:15.811252   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:15.811271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:15.811283   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:15.862159   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:15.862196   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:15.877436   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:15.877460   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:15.948486   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:15.948513   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:15.948525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:16.030585   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:16.030617   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:18.592020   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:18.607286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:18.607368   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:18.647886   47919 cri.go:89] found id: ""
	I0229 19:00:18.647913   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.647924   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:18.647951   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:18.648007   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:18.687394   47919 cri.go:89] found id: ""
	I0229 19:00:18.687420   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.687430   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:18.687436   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:18.687491   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:18.734159   47919 cri.go:89] found id: ""
	I0229 19:00:18.734187   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.734198   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:18.734205   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:18.734262   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:18.782950   47919 cri.go:89] found id: ""
	I0229 19:00:18.782989   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.783000   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:18.783008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:18.783089   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:18.818695   47919 cri.go:89] found id: ""
	I0229 19:00:18.818723   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.818734   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:18.818742   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:18.818805   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:18.859479   47919 cri.go:89] found id: ""
	I0229 19:00:18.859504   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.859515   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:18.859522   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:18.859580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:18.902897   47919 cri.go:89] found id: ""
	I0229 19:00:18.902923   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.902934   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:18.902942   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:18.903002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:18.947708   47919 cri.go:89] found id: ""
	I0229 19:00:18.947731   47919 logs.go:276] 0 containers: []
	W0229 19:00:18.947742   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:18.947752   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:18.947772   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:19.025069   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:19.025092   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:19.025107   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:19.115589   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:19.115626   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:19.164930   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:19.164960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:19.217497   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:19.217531   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:17.584961   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:20.081558   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:18.685924   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.184830   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.015314   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.513573   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:21.733516   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:21.748586   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:21.748648   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:21.788383   47919 cri.go:89] found id: ""
	I0229 19:00:21.788409   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.788420   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:21.788429   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:21.788487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:21.827147   47919 cri.go:89] found id: ""
	I0229 19:00:21.827176   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.827187   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:21.827194   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:21.827255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:21.867525   47919 cri.go:89] found id: ""
	I0229 19:00:21.867552   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.867561   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:21.867570   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:21.867618   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:21.911542   47919 cri.go:89] found id: ""
	I0229 19:00:21.911564   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.911573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:21.911578   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:21.911629   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:21.949779   47919 cri.go:89] found id: ""
	I0229 19:00:21.949803   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.949815   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:21.949821   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:21.949877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:21.989663   47919 cri.go:89] found id: ""
	I0229 19:00:21.989692   47919 logs.go:276] 0 containers: []
	W0229 19:00:21.989701   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:21.989706   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:21.989750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:22.040777   47919 cri.go:89] found id: ""
	I0229 19:00:22.040803   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.040813   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:22.040820   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:22.040876   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:22.100661   47919 cri.go:89] found id: ""
	I0229 19:00:22.100682   47919 logs.go:276] 0 containers: []
	W0229 19:00:22.100689   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:22.100697   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:22.100707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:22.165652   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:22.165682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:22.180278   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:22.180301   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:22.250220   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:22.250242   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:22.250254   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:22.339122   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:22.339160   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:24.894485   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:24.910480   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:24.910555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:22.086489   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.582331   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:23.685199   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:26.185268   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:25.514168   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.014178   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:24.949857   47919 cri.go:89] found id: ""
	I0229 19:00:24.949880   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.949891   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:24.949898   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:24.949968   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:24.993325   47919 cri.go:89] found id: ""
	I0229 19:00:24.993355   47919 logs.go:276] 0 containers: []
	W0229 19:00:24.993366   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:24.993374   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:24.993431   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:25.053180   47919 cri.go:89] found id: ""
	I0229 19:00:25.053201   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.053208   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:25.053214   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:25.053269   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:25.105886   47919 cri.go:89] found id: ""
	I0229 19:00:25.105912   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.105919   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:25.105924   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:25.105969   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:25.161860   47919 cri.go:89] found id: ""
	I0229 19:00:25.161889   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.161907   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:25.161918   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:25.161982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:25.208566   47919 cri.go:89] found id: ""
	I0229 19:00:25.208591   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.208601   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:25.208625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:25.208690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:25.252151   47919 cri.go:89] found id: ""
	I0229 19:00:25.252173   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.252183   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:25.252190   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:25.252255   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:25.293860   47919 cri.go:89] found id: ""
	I0229 19:00:25.293892   47919 logs.go:276] 0 containers: []
	W0229 19:00:25.293903   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:25.293913   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:25.293926   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:25.343332   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:25.343367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:25.357855   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:25.357883   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:25.438031   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:25.438052   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:25.438064   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:25.523752   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:25.523789   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.078701   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:28.103422   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:28.103514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:28.149369   47919 cri.go:89] found id: ""
	I0229 19:00:28.149396   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.149407   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:28.149414   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:28.149481   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:28.191312   47919 cri.go:89] found id: ""
	I0229 19:00:28.191340   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.191350   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:28.191357   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:28.191422   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:28.232257   47919 cri.go:89] found id: ""
	I0229 19:00:28.232283   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.232293   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:28.232301   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:28.232370   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:28.278477   47919 cri.go:89] found id: ""
	I0229 19:00:28.278502   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.278512   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:28.278520   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:28.278580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:28.319368   47919 cri.go:89] found id: ""
	I0229 19:00:28.319393   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.319401   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:28.319406   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:28.319451   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:28.363604   47919 cri.go:89] found id: ""
	I0229 19:00:28.363628   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.363636   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:28.363642   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:28.363688   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:28.403101   47919 cri.go:89] found id: ""
	I0229 19:00:28.403126   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.403137   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:28.403144   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:28.403203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:28.443915   47919 cri.go:89] found id: ""
	I0229 19:00:28.443939   47919 logs.go:276] 0 containers: []
	W0229 19:00:28.443949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:28.443961   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:28.443974   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:28.459084   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:28.459112   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:28.531798   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:28.531827   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:28.531843   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:28.618141   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:28.618182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:28.664993   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:28.665024   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:26.582801   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.584979   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:28.684541   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.184185   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:30.014681   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:32.513959   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:31.218793   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:31.234816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:31.234890   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:31.273656   47919 cri.go:89] found id: ""
	I0229 19:00:31.273684   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.273692   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:31.273698   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:31.273744   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:31.316292   47919 cri.go:89] found id: ""
	I0229 19:00:31.316314   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.316322   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:31.316330   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:31.316391   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:31.356701   47919 cri.go:89] found id: ""
	I0229 19:00:31.356730   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.356742   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:31.356760   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:31.356813   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:31.395796   47919 cri.go:89] found id: ""
	I0229 19:00:31.395822   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.395830   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:31.395835   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:31.395884   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:31.436461   47919 cri.go:89] found id: ""
	I0229 19:00:31.436483   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.436491   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:31.436496   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:31.436543   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:31.482802   47919 cri.go:89] found id: ""
	I0229 19:00:31.482830   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.482840   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:31.482848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:31.482895   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:31.525897   47919 cri.go:89] found id: ""
	I0229 19:00:31.525930   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.525939   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:31.525949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:31.526009   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:31.566323   47919 cri.go:89] found id: ""
	I0229 19:00:31.566350   47919 logs.go:276] 0 containers: []
	W0229 19:00:31.566362   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:31.566372   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:31.566388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:31.618633   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:31.618674   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:31.634144   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:31.634166   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:31.712112   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:31.712136   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:31.712150   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:31.795159   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:31.795190   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:34.365419   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:34.380447   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:34.380521   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:34.422256   47919 cri.go:89] found id: ""
	I0229 19:00:34.422284   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.422295   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:34.422302   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:34.422359   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:34.466548   47919 cri.go:89] found id: ""
	I0229 19:00:34.466578   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.466588   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:34.466596   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:34.466654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:34.508359   47919 cri.go:89] found id: ""
	I0229 19:00:34.508395   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.508407   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:34.508414   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:34.508482   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:34.551284   47919 cri.go:89] found id: ""
	I0229 19:00:34.551308   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.551319   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:34.551325   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:34.551371   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:34.593360   47919 cri.go:89] found id: ""
	I0229 19:00:34.593385   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.593395   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:34.593403   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:34.593469   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:34.632097   47919 cri.go:89] found id: ""
	I0229 19:00:34.632117   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.632124   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:34.632135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:34.632180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:34.679495   47919 cri.go:89] found id: ""
	I0229 19:00:34.679521   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.679529   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:34.679534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:34.679580   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:34.723322   47919 cri.go:89] found id: ""
	I0229 19:00:34.723351   47919 logs.go:276] 0 containers: []
	W0229 19:00:34.723361   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:34.723371   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:34.723387   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:34.741497   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:34.741525   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:34.833908   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:34.833932   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:34.833944   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:34.927172   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:34.927203   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:31.083690   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.583972   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:33.186129   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:35.685350   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.514619   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:36.514937   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:39.014137   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:34.980487   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:34.980520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:37.535829   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:37.551274   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:37.551342   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:37.590225   47919 cri.go:89] found id: ""
	I0229 19:00:37.590263   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.590282   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:37.590289   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:37.590347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:37.630546   47919 cri.go:89] found id: ""
	I0229 19:00:37.630574   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.630585   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:37.630592   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:37.630651   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:37.676219   47919 cri.go:89] found id: ""
	I0229 19:00:37.676250   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.676261   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:37.676268   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:37.676329   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:37.713689   47919 cri.go:89] found id: ""
	I0229 19:00:37.713712   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.713721   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:37.713729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:37.713791   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:37.767999   47919 cri.go:89] found id: ""
	I0229 19:00:37.768034   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.768049   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:37.768057   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:37.768114   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:37.816836   47919 cri.go:89] found id: ""
	I0229 19:00:37.816865   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.816876   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:37.816884   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:37.816948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:37.876044   47919 cri.go:89] found id: ""
	I0229 19:00:37.876072   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.876084   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:37.876091   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:37.876151   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:37.926075   47919 cri.go:89] found id: ""
	I0229 19:00:37.926110   47919 logs.go:276] 0 containers: []
	W0229 19:00:37.926122   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:37.926132   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:37.926147   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:38.004621   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:38.004648   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:38.004663   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:38.091456   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:38.091493   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:38.140118   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:38.140144   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:38.197206   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:38.197243   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:35.587937   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.082516   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.083269   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:38.184999   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.684029   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:42.684537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:41.016248   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:43.018730   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:40.713817   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:40.731550   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:40.731613   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:40.787760   47919 cri.go:89] found id: ""
	I0229 19:00:40.787788   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.787798   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:40.787806   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:40.787868   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:40.847842   47919 cri.go:89] found id: ""
	I0229 19:00:40.847870   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.847881   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:40.847888   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:40.847956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:40.888452   47919 cri.go:89] found id: ""
	I0229 19:00:40.888481   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.888493   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:40.888501   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:40.888562   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:40.927727   47919 cri.go:89] found id: ""
	I0229 19:00:40.927749   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.927757   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:40.927762   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:40.927821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:40.967696   47919 cri.go:89] found id: ""
	I0229 19:00:40.967725   47919 logs.go:276] 0 containers: []
	W0229 19:00:40.967737   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:40.967745   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:40.967804   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:41.008092   47919 cri.go:89] found id: ""
	I0229 19:00:41.008117   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.008127   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:41.008135   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:41.008190   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:41.049235   47919 cri.go:89] found id: ""
	I0229 19:00:41.049265   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.049277   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:41.049285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:41.049393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:41.092962   47919 cri.go:89] found id: ""
	I0229 19:00:41.092988   47919 logs.go:276] 0 containers: []
	W0229 19:00:41.092999   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:41.093018   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:41.093033   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:41.146322   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:41.146368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:41.161961   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:41.161986   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:41.248674   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:41.248705   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:41.248732   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:41.333647   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:41.333689   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:43.882007   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:43.897786   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:43.897860   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:43.943918   47919 cri.go:89] found id: ""
	I0229 19:00:43.943946   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.943955   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:43.943960   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:43.944010   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:43.988622   47919 cri.go:89] found id: ""
	I0229 19:00:43.988643   47919 logs.go:276] 0 containers: []
	W0229 19:00:43.988650   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:43.988655   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:43.988699   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:44.036419   47919 cri.go:89] found id: ""
	I0229 19:00:44.036455   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.036466   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:44.036471   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:44.036530   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:44.078018   47919 cri.go:89] found id: ""
	I0229 19:00:44.078046   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.078056   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:44.078063   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:44.078119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:44.116142   47919 cri.go:89] found id: ""
	I0229 19:00:44.116168   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.116177   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:44.116183   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:44.116243   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:44.158804   47919 cri.go:89] found id: ""
	I0229 19:00:44.158826   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.158833   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:44.158839   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:44.158889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:44.204069   47919 cri.go:89] found id: ""
	I0229 19:00:44.204096   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.204106   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:44.204114   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:44.204173   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:44.247904   47919 cri.go:89] found id: ""
	I0229 19:00:44.247935   47919 logs.go:276] 0 containers: []
	W0229 19:00:44.247949   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:44.247959   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:44.247973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:44.338653   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:44.338690   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:44.384041   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:44.384069   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:44.439539   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:44.439575   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:44.455345   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:44.455372   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:44.538204   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:42.083656   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:44.584493   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.184119   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.684925   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:45.513638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:48.014638   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:47.038895   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:47.054457   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:47.054539   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:47.099854   47919 cri.go:89] found id: ""
	I0229 19:00:47.099879   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.099890   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.099899   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:47.099956   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:47.141354   47919 cri.go:89] found id: ""
	I0229 19:00:47.141381   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.141391   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.141398   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:47.141454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:47.181906   47919 cri.go:89] found id: ""
	I0229 19:00:47.181932   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.181942   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.181949   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:47.182003   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:47.222505   47919 cri.go:89] found id: ""
	I0229 19:00:47.222530   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.222538   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:47.222548   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:47.222603   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:47.265567   47919 cri.go:89] found id: ""
	I0229 19:00:47.265604   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.265616   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:47.265625   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:47.265690   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:47.304698   47919 cri.go:89] found id: ""
	I0229 19:00:47.304723   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.304730   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:47.304736   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:47.304781   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:47.344154   47919 cri.go:89] found id: ""
	I0229 19:00:47.344175   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.344182   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:47.344187   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:47.344230   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:47.383849   47919 cri.go:89] found id: ""
	I0229 19:00:47.383878   47919 logs.go:276] 0 containers: []
	W0229 19:00:47.383889   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:47.383900   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:47.383915   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:47.458895   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:47.458914   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:47.458933   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:47.547776   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:47.547823   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:47.622606   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:47.622639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:47.685327   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:47.685356   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:47.084225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:49.584008   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.186274   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.684452   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.014671   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:52.514321   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:50.202151   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:50.218008   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:50.218063   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:50.265322   47919 cri.go:89] found id: ""
	I0229 19:00:50.265345   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.265353   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:50.265358   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:50.265424   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:50.305646   47919 cri.go:89] found id: ""
	I0229 19:00:50.305669   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.305677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:50.305682   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:50.305732   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:50.342855   47919 cri.go:89] found id: ""
	I0229 19:00:50.342885   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.342894   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:50.342899   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:50.342948   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:50.385365   47919 cri.go:89] found id: ""
	I0229 19:00:50.385396   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.385404   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:50.385410   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:50.385456   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:50.425212   47919 cri.go:89] found id: ""
	I0229 19:00:50.425238   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.425256   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:50.425263   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:50.425321   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:50.465325   47919 cri.go:89] found id: ""
	I0229 19:00:50.465355   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.465366   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:50.465382   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:50.465455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:50.516256   47919 cri.go:89] found id: ""
	I0229 19:00:50.516282   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.516291   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:50.516297   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:50.516355   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:50.562233   47919 cri.go:89] found id: ""
	I0229 19:00:50.562262   47919 logs.go:276] 0 containers: []
	W0229 19:00:50.562272   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:50.562280   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:50.562292   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:50.660311   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:50.660346   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:50.702790   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:50.702815   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:50.752085   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:50.752123   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:50.768346   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:50.768378   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:50.842567   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.343011   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:53.358002   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:53.358072   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:53.398397   47919 cri.go:89] found id: ""
	I0229 19:00:53.398424   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.398433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:53.398440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:53.398501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:53.437020   47919 cri.go:89] found id: ""
	I0229 19:00:53.437048   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.437059   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:53.437067   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:53.437116   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:53.473350   47919 cri.go:89] found id: ""
	I0229 19:00:53.473377   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.473388   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:53.473395   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:53.473454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:53.525678   47919 cri.go:89] found id: ""
	I0229 19:00:53.525701   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.525708   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:53.525716   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:53.525772   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:53.595411   47919 cri.go:89] found id: ""
	I0229 19:00:53.595437   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.595448   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:53.595456   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:53.595518   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:53.635890   47919 cri.go:89] found id: ""
	I0229 19:00:53.635916   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.635923   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:53.635929   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:53.635992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:53.674966   47919 cri.go:89] found id: ""
	I0229 19:00:53.674992   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.675000   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:53.675005   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:53.675076   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:53.713839   47919 cri.go:89] found id: ""
	I0229 19:00:53.713860   47919 logs.go:276] 0 containers: []
	W0229 19:00:53.713868   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:53.713882   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:53.713896   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:53.765185   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:53.765219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:53.780830   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:53.780855   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:53.858528   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:53.858552   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:53.858567   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:53.936002   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:53.936034   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:52.085082   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:54.583306   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.184645   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.684780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:55.015395   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:57.015941   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.017683   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:56.481406   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:56.498980   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:56.499059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:56.557482   47919 cri.go:89] found id: ""
	I0229 19:00:56.557509   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.557520   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:56.557528   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:56.557587   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:56.625912   47919 cri.go:89] found id: ""
	I0229 19:00:56.625941   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.625952   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:56.625964   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:56.626023   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:56.663104   47919 cri.go:89] found id: ""
	I0229 19:00:56.663193   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.663210   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:56.663217   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:56.663265   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:56.707473   47919 cri.go:89] found id: ""
	I0229 19:00:56.707494   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.707502   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:56.707507   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:56.707564   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:56.752569   47919 cri.go:89] found id: ""
	I0229 19:00:56.752593   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.752604   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:56.752611   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:56.752673   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:56.793618   47919 cri.go:89] found id: ""
	I0229 19:00:56.793660   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.793672   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:56.793680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:56.793741   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:56.833215   47919 cri.go:89] found id: ""
	I0229 19:00:56.833241   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.833252   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:56.833259   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:56.833319   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.873162   47919 cri.go:89] found id: ""
	I0229 19:00:56.873187   47919 logs.go:276] 0 containers: []
	W0229 19:00:56.873195   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:56.873203   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:56.873219   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:56.887683   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:56.887707   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:56.957351   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:56.957369   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:00:56.957380   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:00:57.042415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:00:57.042449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:57.087636   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:57.087660   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:00:59.637662   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:00:59.652747   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:00:59.652815   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:00:59.692780   47919 cri.go:89] found id: ""
	I0229 19:00:59.692801   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.692809   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:59.692814   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:00:59.692891   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:00:59.733445   47919 cri.go:89] found id: ""
	I0229 19:00:59.733474   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.733482   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:00:59.733488   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:00:59.733535   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:00:59.769723   47919 cri.go:89] found id: ""
	I0229 19:00:59.769754   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.769764   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:00:59.769770   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:00:59.769828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:00:59.807810   47919 cri.go:89] found id: ""
	I0229 19:00:59.807837   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.807848   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:59.807855   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:00:59.807916   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:00:59.849623   47919 cri.go:89] found id: ""
	I0229 19:00:59.849649   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.849659   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:59.849666   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:00:59.849730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:00:59.895593   47919 cri.go:89] found id: ""
	I0229 19:00:59.895620   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.895631   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:59.895638   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:00:59.895698   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:00:59.935693   47919 cri.go:89] found id: ""
	I0229 19:00:59.935716   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.935724   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:59.935729   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:00:59.935786   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:00:56.585093   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.083485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.687672   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:02.184276   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:01.027786   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.514296   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:00:59.977655   47919 cri.go:89] found id: ""
	I0229 19:00:59.977685   47919 logs.go:276] 0 containers: []
	W0229 19:00:59.977693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:00:59.977710   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:59.977725   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:59.992518   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:59.992545   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:00.075660   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:00.075679   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:00.075691   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:00.162338   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:00.162384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:00.207000   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:00.207049   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:02.759942   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:02.776225   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:02.776293   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:02.812511   47919 cri.go:89] found id: ""
	I0229 19:01:02.812538   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.812549   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:02.812556   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:02.812614   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:02.851417   47919 cri.go:89] found id: ""
	I0229 19:01:02.851448   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.851467   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:02.851483   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:02.851560   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:02.894440   47919 cri.go:89] found id: ""
	I0229 19:01:02.894465   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.894475   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:02.894487   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:02.894542   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:02.931046   47919 cri.go:89] found id: ""
	I0229 19:01:02.931075   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.931084   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:02.931092   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:02.931150   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:02.971204   47919 cri.go:89] found id: ""
	I0229 19:01:02.971226   47919 logs.go:276] 0 containers: []
	W0229 19:01:02.971233   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:02.971238   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:02.971307   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:03.011695   47919 cri.go:89] found id: ""
	I0229 19:01:03.011723   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.011734   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:03.011741   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:03.011796   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:03.054738   47919 cri.go:89] found id: ""
	I0229 19:01:03.054763   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.054775   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:03.054782   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:03.054857   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:03.099242   47919 cri.go:89] found id: ""
	I0229 19:01:03.099267   47919 logs.go:276] 0 containers: []
	W0229 19:01:03.099278   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:03.099289   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:03.099303   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:03.148748   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:03.148778   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:03.164550   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:03.164578   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:03.241564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:03.241586   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:03.241601   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:03.329350   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:03.329384   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:01.085890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:03.582960   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:04.683846   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:06.684979   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.514444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.014275   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:05.884415   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:05.901979   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:05.902044   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:05.946382   47919 cri.go:89] found id: ""
	I0229 19:01:05.946407   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.946415   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:05.946421   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:05.946488   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:05.991783   47919 cri.go:89] found id: ""
	I0229 19:01:05.991807   47919 logs.go:276] 0 containers: []
	W0229 19:01:05.991816   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:05.991822   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:05.991879   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:06.034390   47919 cri.go:89] found id: ""
	I0229 19:01:06.034417   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.034426   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:06.034431   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:06.034475   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:06.078417   47919 cri.go:89] found id: ""
	I0229 19:01:06.078445   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.078456   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:06.078463   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:06.078527   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:06.119892   47919 cri.go:89] found id: ""
	I0229 19:01:06.119927   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.119938   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:06.119952   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:06.120008   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:06.159308   47919 cri.go:89] found id: ""
	I0229 19:01:06.159332   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.159339   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:06.159346   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:06.159410   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:06.208715   47919 cri.go:89] found id: ""
	I0229 19:01:06.208742   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.208751   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:06.208756   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:06.208812   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:06.253831   47919 cri.go:89] found id: ""
	I0229 19:01:06.253858   47919 logs.go:276] 0 containers: []
	W0229 19:01:06.253866   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:06.253881   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:06.253895   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:06.315105   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:06.315141   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:06.349340   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:06.349386   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:06.431456   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:06.431477   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:06.431492   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:06.517754   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:06.517783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.064267   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:09.078751   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:09.078822   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:09.130371   47919 cri.go:89] found id: ""
	I0229 19:01:09.130396   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.130404   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:09.130410   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:09.130461   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:09.166312   47919 cri.go:89] found id: ""
	I0229 19:01:09.166340   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.166351   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:09.166359   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:09.166415   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:09.202957   47919 cri.go:89] found id: ""
	I0229 19:01:09.202978   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.202985   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:09.202991   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:09.203050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:09.242350   47919 cri.go:89] found id: ""
	I0229 19:01:09.242380   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.242391   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:09.242399   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:09.242455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:09.300471   47919 cri.go:89] found id: ""
	I0229 19:01:09.300492   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.300500   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:09.300505   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:09.300568   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:09.356861   47919 cri.go:89] found id: ""
	I0229 19:01:09.356886   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.356893   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:09.356898   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:09.356965   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:09.411042   47919 cri.go:89] found id: ""
	I0229 19:01:09.411067   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.411075   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:09.411080   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:09.411136   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:09.446312   47919 cri.go:89] found id: ""
	I0229 19:01:09.446336   47919 logs.go:276] 0 containers: []
	W0229 19:01:09.446347   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:09.446356   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:09.446367   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:09.492195   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:09.492227   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:09.541943   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:09.541973   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:09.557347   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:09.557373   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:09.635319   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:09.635363   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:09.635379   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:05.584255   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:08.082899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.083808   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:09.189158   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:11.684731   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:10.513801   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.514492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:12.224271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:12.243330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:12.243403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:12.285525   47919 cri.go:89] found id: ""
	I0229 19:01:12.285547   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.285556   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:12.285561   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:12.285617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:12.347511   47919 cri.go:89] found id: ""
	I0229 19:01:12.347535   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.347543   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:12.347548   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:12.347593   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:12.392145   47919 cri.go:89] found id: ""
	I0229 19:01:12.392207   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.392231   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:12.392248   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:12.392366   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:12.430238   47919 cri.go:89] found id: ""
	I0229 19:01:12.430268   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.430278   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:12.430286   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:12.430345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:12.473019   47919 cri.go:89] found id: ""
	I0229 19:01:12.473054   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.473065   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:12.473072   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:12.473131   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:12.510653   47919 cri.go:89] found id: ""
	I0229 19:01:12.510681   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.510692   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:12.510699   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:12.510759   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:12.548137   47919 cri.go:89] found id: ""
	I0229 19:01:12.548163   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.548171   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:12.548176   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:12.548232   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:12.588416   47919 cri.go:89] found id: ""
	I0229 19:01:12.588435   47919 logs.go:276] 0 containers: []
	W0229 19:01:12.588443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:12.588452   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:12.588467   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:12.603651   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:12.603681   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:12.681060   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:12.681081   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:12.681094   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:12.764839   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:12.764870   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:12.807178   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:12.807202   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:12.583319   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.583681   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.184569   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:16.185919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:14.514955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:17.014358   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.016452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:15.357205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:15.382491   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:15.382571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:15.422538   47919 cri.go:89] found id: ""
	I0229 19:01:15.422561   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.422568   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:15.422577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:15.422635   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:15.464564   47919 cri.go:89] found id: ""
	I0229 19:01:15.464593   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.464601   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:15.464607   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:15.464662   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:15.502625   47919 cri.go:89] found id: ""
	I0229 19:01:15.502650   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.502662   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:15.502669   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:15.502724   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:15.543187   47919 cri.go:89] found id: ""
	I0229 19:01:15.543215   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.543229   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:15.543234   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:15.543283   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:15.585273   47919 cri.go:89] found id: ""
	I0229 19:01:15.585296   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.585306   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:15.585314   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:15.585386   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:15.626180   47919 cri.go:89] found id: ""
	I0229 19:01:15.626208   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.626219   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:15.626227   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:15.626288   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:15.670572   47919 cri.go:89] found id: ""
	I0229 19:01:15.670596   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.670604   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:15.670610   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:15.670657   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:15.710549   47919 cri.go:89] found id: ""
	I0229 19:01:15.710587   47919 logs.go:276] 0 containers: []
	W0229 19:01:15.710595   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:15.710604   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:15.710618   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:15.765148   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:15.765180   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:15.780717   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:15.780742   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:15.852811   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:15.852835   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:15.852856   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:15.930728   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:15.930759   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.483798   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:18.497545   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:18.497611   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:18.540226   47919 cri.go:89] found id: ""
	I0229 19:01:18.540256   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.540266   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:18.540274   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:18.540336   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:18.578106   47919 cri.go:89] found id: ""
	I0229 19:01:18.578124   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.578134   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:18.578142   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:18.578192   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:18.617138   47919 cri.go:89] found id: ""
	I0229 19:01:18.617167   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.617178   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:18.617185   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:18.617242   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:18.654667   47919 cri.go:89] found id: ""
	I0229 19:01:18.654762   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.654779   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:18.654787   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:18.654845   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:18.695837   47919 cri.go:89] found id: ""
	I0229 19:01:18.695859   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.695866   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:18.695875   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:18.695929   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:18.738178   47919 cri.go:89] found id: ""
	I0229 19:01:18.738199   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.738206   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:18.738211   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:18.738259   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:18.777018   47919 cri.go:89] found id: ""
	I0229 19:01:18.777044   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.777052   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:18.777058   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:18.777102   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:18.820701   47919 cri.go:89] found id: ""
	I0229 19:01:18.820723   47919 logs.go:276] 0 containers: []
	W0229 19:01:18.820734   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:18.820746   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:18.820762   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:18.907150   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:18.907182   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:18.950363   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:18.950393   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:18.999446   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:18.999479   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:19.020681   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:19.020714   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:19.139305   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:17.083357   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:19.087286   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:18.684811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:20.684974   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:22.685289   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.513256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.513492   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:21.640062   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:21.654739   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:21.654799   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:21.701885   47919 cri.go:89] found id: ""
	I0229 19:01:21.701912   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.701921   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:21.701929   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:21.701987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:21.746736   47919 cri.go:89] found id: ""
	I0229 19:01:21.746767   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.746780   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:21.746787   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:21.746847   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:21.784830   47919 cri.go:89] found id: ""
	I0229 19:01:21.784851   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.784859   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:21.784865   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:21.784911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.824122   47919 cri.go:89] found id: ""
	I0229 19:01:21.824151   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.824162   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:21.824171   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:21.824217   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:21.869937   47919 cri.go:89] found id: ""
	I0229 19:01:21.869967   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.869979   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:21.869986   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:21.870043   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:21.909902   47919 cri.go:89] found id: ""
	I0229 19:01:21.909928   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.909939   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:21.909946   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:21.910005   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:21.953980   47919 cri.go:89] found id: ""
	I0229 19:01:21.954021   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.954033   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:21.954040   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:21.954108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:21.997483   47919 cri.go:89] found id: ""
	I0229 19:01:21.997510   47919 logs.go:276] 0 containers: []
	W0229 19:01:21.997521   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:21.997531   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:21.997546   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:22.108610   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:22.108639   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:22.153571   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:22.153596   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:22.204525   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:22.204555   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:22.219217   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:22.219241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:22.294794   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:24.795157   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:24.811292   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:24.811363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:24.854354   47919 cri.go:89] found id: ""
	I0229 19:01:24.854387   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.854396   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:24.854402   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:24.854455   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:24.890800   47919 cri.go:89] found id: ""
	I0229 19:01:24.890828   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.890838   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:24.890844   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:24.890900   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:24.930961   47919 cri.go:89] found id: ""
	I0229 19:01:24.930983   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.930991   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:24.931001   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:24.931073   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:21.582702   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:23.584665   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.185732   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:27.683784   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:25.513886   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.016852   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:24.968719   47919 cri.go:89] found id: ""
	I0229 19:01:24.968740   47919 logs.go:276] 0 containers: []
	W0229 19:01:24.968747   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:24.968752   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:24.968809   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:25.012723   47919 cri.go:89] found id: ""
	I0229 19:01:25.012746   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.012756   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:25.012763   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:25.012821   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:25.064388   47919 cri.go:89] found id: ""
	I0229 19:01:25.064412   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.064422   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:25.064435   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:25.064496   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:25.122256   47919 cri.go:89] found id: ""
	I0229 19:01:25.122277   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.122286   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:25.122291   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:25.122335   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:25.165487   47919 cri.go:89] found id: ""
	I0229 19:01:25.165515   47919 logs.go:276] 0 containers: []
	W0229 19:01:25.165526   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:25.165536   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:25.165557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:25.249294   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:25.249333   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:25.297013   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:25.297048   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:25.346276   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:25.346309   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:25.362604   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:25.362635   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:25.434586   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:27.935727   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:27.950680   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:27.950750   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:27.989253   47919 cri.go:89] found id: ""
	I0229 19:01:27.989282   47919 logs.go:276] 0 containers: []
	W0229 19:01:27.989293   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:27.989300   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:27.989357   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:28.039714   47919 cri.go:89] found id: ""
	I0229 19:01:28.039741   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.039750   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:28.039763   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:28.039828   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:28.102860   47919 cri.go:89] found id: ""
	I0229 19:01:28.102886   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.102897   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:28.102904   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:28.102971   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:28.160075   47919 cri.go:89] found id: ""
	I0229 19:01:28.160097   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.160104   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:28.160110   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:28.160180   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:28.200297   47919 cri.go:89] found id: ""
	I0229 19:01:28.200317   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.200325   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:28.200330   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:28.200393   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:28.239912   47919 cri.go:89] found id: ""
	I0229 19:01:28.239944   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.239955   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:28.239963   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:28.240018   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:28.278525   47919 cri.go:89] found id: ""
	I0229 19:01:28.278550   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.278558   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:28.278564   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:28.278617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:28.315659   47919 cri.go:89] found id: ""
	I0229 19:01:28.315685   47919 logs.go:276] 0 containers: []
	W0229 19:01:28.315693   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:28.315703   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:28.315716   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:28.330102   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:28.330127   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:28.402474   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:28.402497   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:28.402513   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:28.486271   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:28.486308   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:28.531888   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:28.531918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:26.083338   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:28.083983   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.085481   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:29.684229   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.184054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:30.513642   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:32.514405   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:31.082385   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:31.122771   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:31.122844   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:31.165097   47919 cri.go:89] found id: ""
	I0229 19:01:31.165127   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.165138   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:31.165148   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:31.165215   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:31.209449   47919 cri.go:89] found id: ""
	I0229 19:01:31.209482   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.209492   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:31.209498   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:31.209559   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:31.249660   47919 cri.go:89] found id: ""
	I0229 19:01:31.249687   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.249698   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:31.249705   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:31.249770   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:31.299268   47919 cri.go:89] found id: ""
	I0229 19:01:31.299292   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.299301   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:31.299308   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:31.299363   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:31.339078   47919 cri.go:89] found id: ""
	I0229 19:01:31.339111   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.339123   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:31.339131   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:31.339194   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:31.378548   47919 cri.go:89] found id: ""
	I0229 19:01:31.378576   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.378587   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:31.378595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:31.378654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:31.418744   47919 cri.go:89] found id: ""
	I0229 19:01:31.418780   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.418812   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:31.418824   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:31.418889   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:31.464078   47919 cri.go:89] found id: ""
	I0229 19:01:31.464103   47919 logs.go:276] 0 containers: []
	W0229 19:01:31.464113   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:31.464124   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:31.464138   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:31.516406   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:31.516434   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:31.531504   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:31.531527   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:31.607391   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:31.607413   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:31.607426   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:31.691582   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:31.691609   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.233205   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:34.250283   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:34.250345   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:34.294588   47919 cri.go:89] found id: ""
	I0229 19:01:34.294620   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.294631   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:34.294639   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:34.294712   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:34.337033   47919 cri.go:89] found id: ""
	I0229 19:01:34.337061   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.337071   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:34.337079   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:34.337141   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:34.382800   47919 cri.go:89] found id: ""
	I0229 19:01:34.382831   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.382840   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:34.382845   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:34.382904   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:34.422931   47919 cri.go:89] found id: ""
	I0229 19:01:34.422959   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.422970   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:34.422977   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:34.423059   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:34.469724   47919 cri.go:89] found id: ""
	I0229 19:01:34.469755   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.469765   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:34.469773   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:34.469824   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:34.513428   47919 cri.go:89] found id: ""
	I0229 19:01:34.513461   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.513472   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:34.513479   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:34.513555   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:34.552593   47919 cri.go:89] found id: ""
	I0229 19:01:34.552638   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.552648   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:34.552655   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:34.552717   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:34.596516   47919 cri.go:89] found id: ""
	I0229 19:01:34.596538   47919 logs.go:276] 0 containers: []
	W0229 19:01:34.596546   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:34.596554   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:34.596568   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:34.611782   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:34.611805   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:34.694333   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:34.694352   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:34.694368   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:34.781638   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:34.781669   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:34.832910   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:34.832943   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:32.584363   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.585650   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.185025   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:36.683723   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:34.515185   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.013287   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.014417   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:37.398458   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:37.415617   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:37.415696   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:37.455390   47919 cri.go:89] found id: ""
	I0229 19:01:37.455421   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.455433   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:37.455440   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:37.455501   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:37.498869   47919 cri.go:89] found id: ""
	I0229 19:01:37.498890   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.498901   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:37.498909   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:37.498972   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:37.538928   47919 cri.go:89] found id: ""
	I0229 19:01:37.538952   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.538960   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:37.538966   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:37.539012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:37.577278   47919 cri.go:89] found id: ""
	I0229 19:01:37.577299   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.577310   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:37.577317   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:37.577372   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:37.620313   47919 cri.go:89] found id: ""
	I0229 19:01:37.620342   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.620352   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:37.620359   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:37.620420   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:37.657696   47919 cri.go:89] found id: ""
	I0229 19:01:37.657717   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.657726   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:37.657734   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:37.657792   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:37.698814   47919 cri.go:89] found id: ""
	I0229 19:01:37.698833   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.698841   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:37.698848   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:37.698902   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:37.736438   47919 cri.go:89] found id: ""
	I0229 19:01:37.736469   47919 logs.go:276] 0 containers: []
	W0229 19:01:37.736480   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:37.736490   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:37.736506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:37.753849   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:37.753871   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:37.854740   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:37.854764   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:37.854783   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:37.943837   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:37.943872   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:37.988180   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:37.988209   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:37.084353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.582760   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:39.183743   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.184218   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:41.014652   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.014745   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:40.543133   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:40.558453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:40.558526   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:40.599794   47919 cri.go:89] found id: ""
	I0229 19:01:40.599814   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.599821   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:40.599827   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:40.599874   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:40.641738   47919 cri.go:89] found id: ""
	I0229 19:01:40.641762   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.641769   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:40.641775   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:40.641819   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:40.683905   47919 cri.go:89] found id: ""
	I0229 19:01:40.683935   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.683945   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:40.683953   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:40.684006   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:40.727645   47919 cri.go:89] found id: ""
	I0229 19:01:40.727675   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.727685   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:40.727693   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:40.727754   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:40.785142   47919 cri.go:89] found id: ""
	I0229 19:01:40.785172   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.785192   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:40.785199   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:40.785252   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:40.854534   47919 cri.go:89] found id: ""
	I0229 19:01:40.854560   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.854571   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:40.854580   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:40.854639   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:40.900823   47919 cri.go:89] found id: ""
	I0229 19:01:40.900851   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.900862   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:40.900869   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:40.900928   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:40.938108   47919 cri.go:89] found id: ""
	I0229 19:01:40.938135   47919 logs.go:276] 0 containers: []
	W0229 19:01:40.938146   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:40.938156   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:40.938171   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:40.987452   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:40.987482   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:41.037388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:41.037417   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:41.051987   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:41.052015   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:41.126077   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:41.126102   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:41.126116   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:43.715745   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:43.730683   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:43.730755   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:43.790637   47919 cri.go:89] found id: ""
	I0229 19:01:43.790665   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.790676   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:43.790682   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:43.790731   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:43.848237   47919 cri.go:89] found id: ""
	I0229 19:01:43.848263   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.848272   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:43.848277   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:43.848337   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:43.897892   47919 cri.go:89] found id: ""
	I0229 19:01:43.897920   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.897928   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:43.897934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:43.897989   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:43.936068   47919 cri.go:89] found id: ""
	I0229 19:01:43.936089   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.936097   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:43.936102   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:43.936149   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:43.978636   47919 cri.go:89] found id: ""
	I0229 19:01:43.978670   47919 logs.go:276] 0 containers: []
	W0229 19:01:43.978682   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:43.978689   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:43.978751   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:44.018642   47919 cri.go:89] found id: ""
	I0229 19:01:44.018676   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.018684   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:44.018690   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:44.018737   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:44.056237   47919 cri.go:89] found id: ""
	I0229 19:01:44.056267   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.056278   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:44.056285   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:44.056347   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:44.095489   47919 cri.go:89] found id: ""
	I0229 19:01:44.095522   47919 logs.go:276] 0 containers: []
	W0229 19:01:44.095532   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:44.095543   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:44.095557   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:44.139407   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:44.139433   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:44.189893   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:44.189921   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:44.206426   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:44.206449   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:44.285594   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:44.285621   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:44.285638   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:41.584614   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:44.083599   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:43.185509   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.683851   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.684064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:45.015082   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:47.017540   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:46.869271   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:46.885267   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:46.885356   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:46.921696   47919 cri.go:89] found id: ""
	I0229 19:01:46.921718   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.921725   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:46.921731   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:46.921789   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:46.960265   47919 cri.go:89] found id: ""
	I0229 19:01:46.960291   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.960302   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:46.960309   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:46.960367   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:46.998035   47919 cri.go:89] found id: ""
	I0229 19:01:46.998062   47919 logs.go:276] 0 containers: []
	W0229 19:01:46.998070   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:46.998075   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:46.998119   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:47.041563   47919 cri.go:89] found id: ""
	I0229 19:01:47.041586   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.041595   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:47.041600   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:47.041643   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:47.084146   47919 cri.go:89] found id: ""
	I0229 19:01:47.084167   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.084174   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:47.084179   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:47.084227   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:47.126813   47919 cri.go:89] found id: ""
	I0229 19:01:47.126835   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.126845   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:47.126853   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:47.126909   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:47.165379   47919 cri.go:89] found id: ""
	I0229 19:01:47.165399   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.165406   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:47.165412   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:47.165454   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:47.204263   47919 cri.go:89] found id: ""
	I0229 19:01:47.204306   47919 logs.go:276] 0 containers: []
	W0229 19:01:47.204316   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:47.204328   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:47.204345   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:47.248848   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:47.248876   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:47.299388   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:47.299416   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:47.314484   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:47.314507   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:47.386231   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:47.386256   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:47.386272   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:46.084527   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:48.085557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:50.189188   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:52.684126   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.513497   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:51.514191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.515909   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:49.965988   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:49.980621   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:49.980700   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:50.025010   47919 cri.go:89] found id: ""
	I0229 19:01:50.025030   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.025037   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:50.025042   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:50.025090   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:50.066947   47919 cri.go:89] found id: ""
	I0229 19:01:50.066976   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.066984   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:50.066990   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:50.067061   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:50.108892   47919 cri.go:89] found id: ""
	I0229 19:01:50.108913   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.108931   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:50.108937   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:50.108997   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:50.149601   47919 cri.go:89] found id: ""
	I0229 19:01:50.149626   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.149636   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:50.149643   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:50.149704   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:50.191881   47919 cri.go:89] found id: ""
	I0229 19:01:50.191908   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.191918   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:50.191925   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:50.191987   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:50.233782   47919 cri.go:89] found id: ""
	I0229 19:01:50.233803   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.233811   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:50.233816   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:50.233870   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:50.274913   47919 cri.go:89] found id: ""
	I0229 19:01:50.274941   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.274950   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:50.274955   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:50.275050   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:50.321924   47919 cri.go:89] found id: ""
	I0229 19:01:50.321945   47919 logs.go:276] 0 containers: []
	W0229 19:01:50.321953   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:50.321967   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:50.321978   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:50.367357   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:50.367388   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:50.417229   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:50.417260   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:50.432031   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:50.432056   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:50.504920   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.504942   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:50.504960   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.110884   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:53.126947   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:53.127004   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:53.166940   47919 cri.go:89] found id: ""
	I0229 19:01:53.166965   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.166975   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:53.166982   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:53.167054   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:53.205917   47919 cri.go:89] found id: ""
	I0229 19:01:53.205960   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.205968   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:53.205974   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:53.206030   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:53.245547   47919 cri.go:89] found id: ""
	I0229 19:01:53.245577   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.245587   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:53.245595   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:53.245654   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:53.287513   47919 cri.go:89] found id: ""
	I0229 19:01:53.287540   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.287550   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:53.287557   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:53.287617   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:53.329269   47919 cri.go:89] found id: ""
	I0229 19:01:53.329299   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.329310   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:53.329318   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:53.329379   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:53.377438   47919 cri.go:89] found id: ""
	I0229 19:01:53.377467   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.377478   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:53.377485   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:53.377549   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:53.418414   47919 cri.go:89] found id: ""
	I0229 19:01:53.418440   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.418448   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:53.418453   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:53.418514   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:53.458365   47919 cri.go:89] found id: ""
	I0229 19:01:53.458393   47919 logs.go:276] 0 containers: []
	W0229 19:01:53.458402   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:53.458409   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:53.458421   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:53.540710   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:53.540744   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:53.637271   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:53.637302   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:53.687822   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:53.687850   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:53.703482   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:53.703506   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:53.779564   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:50.584198   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:53.082170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:55.082683   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:54.685554   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.685951   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.013441   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:58.016917   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:56.280300   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:56.295210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:56.295295   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:56.336903   47919 cri.go:89] found id: ""
	I0229 19:01:56.336935   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.336945   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:56.336953   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:56.337002   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:56.373300   47919 cri.go:89] found id: ""
	I0229 19:01:56.373322   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.373330   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:56.373338   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:56.373390   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:56.411949   47919 cri.go:89] found id: ""
	I0229 19:01:56.411975   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.411984   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:56.411990   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:56.412047   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:56.453302   47919 cri.go:89] found id: ""
	I0229 19:01:56.453329   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.453339   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:56.453344   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:56.453403   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:56.490543   47919 cri.go:89] found id: ""
	I0229 19:01:56.490565   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.490576   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:56.490582   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:56.490637   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:56.547078   47919 cri.go:89] found id: ""
	I0229 19:01:56.547101   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.547108   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:56.547113   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:56.547171   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:56.598382   47919 cri.go:89] found id: ""
	I0229 19:01:56.598408   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.598417   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:56.598424   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:56.598478   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:56.646090   47919 cri.go:89] found id: ""
	I0229 19:01:56.646117   47919 logs.go:276] 0 containers: []
	W0229 19:01:56.646125   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:56.646134   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:56.646145   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:01:56.691685   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:01:56.691711   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:01:56.742886   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:01:56.742927   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:01:56.758326   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:56.758350   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:56.830140   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:56.830160   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:56.830177   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:59.414437   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:01:59.429710   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:01:59.429793   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:01:59.473993   47919 cri.go:89] found id: ""
	I0229 19:01:59.474018   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.474025   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:01:59.474031   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:01:59.474091   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:01:59.529114   47919 cri.go:89] found id: ""
	I0229 19:01:59.529143   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.529157   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:01:59.529164   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:01:59.529222   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:01:59.596624   47919 cri.go:89] found id: ""
	I0229 19:01:59.596654   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.596665   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:01:59.596672   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:01:59.596730   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:01:59.641088   47919 cri.go:89] found id: ""
	I0229 19:01:59.641118   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.641130   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:01:59.641138   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:01:59.641198   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:01:59.682294   47919 cri.go:89] found id: ""
	I0229 19:01:59.682318   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.682327   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:01:59.682333   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:01:59.682406   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:01:59.722881   47919 cri.go:89] found id: ""
	I0229 19:01:59.722902   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.722910   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:01:59.722915   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:01:59.722982   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:01:59.761727   47919 cri.go:89] found id: ""
	I0229 19:01:59.761757   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.761767   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:01:59.761778   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:01:59.761839   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:01:59.805733   47919 cri.go:89] found id: ""
	I0229 19:01:59.805762   47919 logs.go:276] 0 containers: []
	W0229 19:01:59.805772   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:01:59.805783   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:01:59.805798   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:01:59.883702   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:01:59.883721   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:01:59.883733   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:01:57.083166   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.085841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.183892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:01.184393   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:00.513790   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.013807   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:01:59.960649   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:01:59.960682   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:00.012085   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:00.012121   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:00.065794   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:00.065834   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.583319   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:02.603123   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:02.603178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:02.654992   47919 cri.go:89] found id: ""
	I0229 19:02:02.655017   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.655046   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:02.655053   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:02.655103   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:02.697067   47919 cri.go:89] found id: ""
	I0229 19:02:02.697098   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.697109   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:02.697116   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:02.697178   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:02.734804   47919 cri.go:89] found id: ""
	I0229 19:02:02.734828   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.734835   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:02.734841   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:02.734893   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:02.778292   47919 cri.go:89] found id: ""
	I0229 19:02:02.778313   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.778321   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:02.778328   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:02.778382   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:02.819431   47919 cri.go:89] found id: ""
	I0229 19:02:02.819458   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.819470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:02.819478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:02.819537   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:02.862409   47919 cri.go:89] found id: ""
	I0229 19:02:02.862432   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.862439   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:02.862445   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:02.862487   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:02.902486   47919 cri.go:89] found id: ""
	I0229 19:02:02.902513   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.902521   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:02.902526   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:02.902571   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:02.939408   47919 cri.go:89] found id: ""
	I0229 19:02:02.939436   47919 logs.go:276] 0 containers: []
	W0229 19:02:02.939443   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:02.939451   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:02.939462   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:02.954539   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:02.954564   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:03.032534   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:03.032556   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:03.032574   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:03.116064   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:03.116096   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:03.167242   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:03.167265   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:01.582557   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:03.583876   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:04.576948   47608 pod_ready.go:81] duration metric: took 4m0.00105469s waiting for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:04.576996   47608 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5w6c9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:04.577015   47608 pod_ready.go:38] duration metric: took 4m12.91384632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:04.577039   47608 kubeadm.go:640] restartCluster took 4m30.900514081s
	W0229 19:02:04.577101   47608 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:04.577137   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:03.684074   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.686050   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.686409   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.014368   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:07.518556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:05.718312   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:05.732879   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:05.733012   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:05.774525   47919 cri.go:89] found id: ""
	I0229 19:02:05.774557   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.774569   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:05.774577   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:05.774640   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:05.817870   47919 cri.go:89] found id: ""
	I0229 19:02:05.817900   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.817912   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:05.817919   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:05.817998   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:05.859533   47919 cri.go:89] found id: ""
	I0229 19:02:05.859565   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.859579   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:05.859587   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:05.859646   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:05.904971   47919 cri.go:89] found id: ""
	I0229 19:02:05.905003   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.905014   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:05.905021   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:05.905086   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:05.950431   47919 cri.go:89] found id: ""
	I0229 19:02:05.950459   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.950470   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:05.950478   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:05.950546   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:05.999464   47919 cri.go:89] found id: ""
	I0229 19:02:05.999489   47919 logs.go:276] 0 containers: []
	W0229 19:02:05.999500   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:05.999508   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:05.999588   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:06.045086   47919 cri.go:89] found id: ""
	I0229 19:02:06.045117   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.045133   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:06.045140   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:06.045203   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:06.091542   47919 cri.go:89] found id: ""
	I0229 19:02:06.091571   47919 logs.go:276] 0 containers: []
	W0229 19:02:06.091583   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:06.091592   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:06.091607   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:06.156524   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:06.156558   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:06.174941   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:06.174965   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:06.260443   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:06.260467   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:06.260483   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:06.377415   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:06.377457   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:08.931407   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:08.946035   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:02:08.946108   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:02:08.989299   47919 cri.go:89] found id: ""
	I0229 19:02:08.989326   47919 logs.go:276] 0 containers: []
	W0229 19:02:08.989338   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:02:08.989345   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:02:08.989405   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:02:09.033634   47919 cri.go:89] found id: ""
	I0229 19:02:09.033664   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.033677   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:02:09.033684   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:02:09.033745   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:02:09.084381   47919 cri.go:89] found id: ""
	I0229 19:02:09.084406   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.084435   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:02:09.084442   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:02:09.084507   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:02:09.132526   47919 cri.go:89] found id: ""
	I0229 19:02:09.132555   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.132573   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:02:09.132581   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:02:09.132644   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:02:09.182655   47919 cri.go:89] found id: ""
	I0229 19:02:09.182684   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.182694   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:02:09.182701   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:02:09.182764   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:02:09.223164   47919 cri.go:89] found id: ""
	I0229 19:02:09.223191   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.223202   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:02:09.223210   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:02:09.223267   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:02:09.271882   47919 cri.go:89] found id: ""
	I0229 19:02:09.271908   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.271926   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:02:09.271934   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:02:09.271992   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:02:09.331796   47919 cri.go:89] found id: ""
	I0229 19:02:09.331826   47919 logs.go:276] 0 containers: []
	W0229 19:02:09.331837   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:02:09.331847   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:02:09.331860   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:02:09.398969   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:02:09.399009   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:02:09.418992   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:02:09.419040   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:02:09.503358   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:02:09.503381   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:02:09.503394   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:02:09.612549   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:02:09.612586   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:02:10.184741   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.685204   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:10.024230   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.513343   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:12.162138   47919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:02:12.175827   47919 kubeadm.go:640] restartCluster took 4m14.562960798s
	W0229 19:02:12.175902   47919 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 19:02:12.175940   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:12.639231   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:12.658353   47919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:12.671552   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:12.684278   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:12.684323   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:02:12.903644   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:15.184189   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.184275   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:14.517015   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:17.015195   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.184474   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:21.184737   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:19.513735   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:22.016650   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:23.185852   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:25.685744   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:24.516493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:26.519091   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:29.013740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:28.184960   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:30.685098   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:31.013808   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:33.514912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.055439   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.47828283s)
	I0229 19:02:37.055501   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:37.077296   47608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:02:37.089984   47608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:02:37.100332   47608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:02:37.100379   47608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:02:37.156153   47608 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:02:37.156243   47608 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:02:37.317040   47608 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:02:37.317142   47608 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:02:37.317220   47608 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:02:37.551800   47608 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:02:33.184422   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:35.686104   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:37.553918   47608 out.go:204]   - Generating certificates and keys ...
	I0229 19:02:37.554019   47608 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:02:37.554099   47608 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:02:37.554197   47608 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:02:37.554271   47608 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:02:37.554545   47608 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:02:37.555258   47608 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:02:37.555792   47608 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:02:37.556150   47608 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:02:37.556697   47608 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:02:37.557215   47608 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:02:37.557744   47608 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:02:37.557835   47608 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:02:37.725663   47608 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:02:37.801114   47608 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:02:37.971825   47608 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:02:38.081281   47608 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:02:38.081986   47608 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:02:38.086435   47608 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:02:36.013356   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.014838   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:38.088264   47608 out.go:204]   - Booting up control plane ...
	I0229 19:02:38.088353   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:02:38.088442   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:02:38.088533   47608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:02:38.106686   47608 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:02:38.107606   47608 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:02:38.107671   47608 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:02:38.264387   47608 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:02:38.185682   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.684963   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:40.014933   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:42.016282   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.768315   47608 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503831 seconds
	I0229 19:02:44.768482   47608 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:02:44.786115   47608 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:02:45.321509   47608 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:02:45.321785   47608 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-991128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:02:45.834905   47608 kubeadm.go:322] [bootstrap-token] Using token: 53x4pg.x71etkalcz6sdqmq
	I0229 19:02:45.836192   47608 out.go:204]   - Configuring RBAC rules ...
	I0229 19:02:45.836319   47608 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:02:45.843486   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:02:45.854690   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:02:45.866571   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:02:45.870812   47608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:02:45.874413   47608 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:02:45.891377   47608 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:02:46.190541   47608 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:02:46.251452   47608 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:02:46.254418   47608 kubeadm.go:322] 
	I0229 19:02:46.254529   47608 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:02:46.254552   47608 kubeadm.go:322] 
	I0229 19:02:46.254653   47608 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:02:46.254663   47608 kubeadm.go:322] 
	I0229 19:02:46.254693   47608 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:02:46.254777   47608 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:02:46.254843   47608 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:02:46.254856   47608 kubeadm.go:322] 
	I0229 19:02:46.254932   47608 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:02:46.254949   47608 kubeadm.go:322] 
	I0229 19:02:46.255010   47608 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:02:46.255035   47608 kubeadm.go:322] 
	I0229 19:02:46.255115   47608 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:02:46.255219   47608 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:02:46.255288   47608 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:02:46.255298   47608 kubeadm.go:322] 
	I0229 19:02:46.255366   47608 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:02:46.255456   47608 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:02:46.255469   47608 kubeadm.go:322] 
	I0229 19:02:46.255574   47608 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.255704   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:02:46.255726   47608 kubeadm.go:322] 	--control-plane 
	I0229 19:02:46.255730   47608 kubeadm.go:322] 
	I0229 19:02:46.255838   47608 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:02:46.255850   47608 kubeadm.go:322] 
	I0229 19:02:46.255951   47608 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 53x4pg.x71etkalcz6sdqmq \
	I0229 19:02:46.256097   47608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:02:46.261669   47608 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:02:46.264240   47608 cni.go:84] Creating CNI manager for ""
	I0229 19:02:46.264255   47608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:02:46.266874   47608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:02:43.185008   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:45.685480   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:44.515334   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:47.014269   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:48.006787   48088 pod_ready.go:81] duration metric: took 4m0.000159724s waiting for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" ...
	E0229 19:02:48.006810   48088 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-226bj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:02:48.006828   48088 pod_ready.go:38] duration metric: took 4m13.055720974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:02:48.006852   48088 kubeadm.go:640] restartCluster took 4m30.764284147s
	W0229 19:02:48.006932   48088 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:02:48.006958   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:02:46.268155   47608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:02:46.302630   47608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:02:46.363238   47608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:02:46.363314   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.363332   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=embed-certs-991128 minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:46.429324   47608 ops.go:34] apiserver oom_adj: -16
	I0229 19:02:46.736245   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.236707   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:47.736427   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.236379   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.736599   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.236640   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:49.736492   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:50.237145   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:48.184252   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.185542   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:52.683769   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:50.736510   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.236643   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:51.736840   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.236378   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:52.736992   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.236672   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:53.736958   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.236590   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:54.736323   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.237218   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:55.184845   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:57.685255   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:02:55.736774   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.236342   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:56.736380   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.236930   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:57.737100   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.237031   47608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:02:58.387963   47608 kubeadm.go:1088] duration metric: took 12.024710189s to wait for elevateKubeSystemPrivileges.
	I0229 19:02:58.388004   47608 kubeadm.go:406] StartCluster complete in 5m24.764885945s
	I0229 19:02:58.388027   47608 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.388120   47608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:02:58.390675   47608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:02:58.390953   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:02:58.391045   47608 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:02:58.391123   47608 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-991128"
	I0229 19:02:58.391146   47608 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-991128"
	W0229 19:02:58.391154   47608 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:02:58.391154   47608 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:02:58.391203   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.391204   47608 addons.go:69] Setting default-storageclass=true in profile "embed-certs-991128"
	I0229 19:02:58.391244   47608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-991128"
	I0229 19:02:58.391596   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391624   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391698   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.391718   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.391204   47608 addons.go:69] Setting metrics-server=true in profile "embed-certs-991128"
	I0229 19:02:58.391948   47608 addons.go:234] Setting addon metrics-server=true in "embed-certs-991128"
	W0229 19:02:58.391957   47608 addons.go:243] addon metrics-server should already be in state true
	I0229 19:02:58.391993   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.392356   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.392387   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.409953   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0229 19:02:58.409972   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0229 19:02:58.410460   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.410478   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411005   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411018   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411018   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.411048   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.411360   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0229 19:02:58.411529   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411534   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.411740   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.411752   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.412075   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.412114   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.412144   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.412164   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.412662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.413148   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.413178   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.415173   47608 addons.go:234] Setting addon default-storageclass=true in "embed-certs-991128"
	W0229 19:02:58.415195   47608 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:02:58.415222   47608 host.go:66] Checking if "embed-certs-991128" exists ...
	I0229 19:02:58.415608   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.415638   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.429891   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0229 19:02:58.430108   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0229 19:02:58.430343   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.430782   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.431278   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431299   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431355   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.431369   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.431662   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.431720   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.432048   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.432471   47608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:02:58.432497   47608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:02:58.432570   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0229 19:02:58.432926   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.433593   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.433611   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.433700   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.436201   47608 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:02:58.434375   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.437531   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:02:58.437549   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:02:58.437568   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.436414   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.440191   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.441799   47608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:02:58.440820   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.441382   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.443189   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.443204   47608 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.443216   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:02:58.443228   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.443226   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.443288   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.443399   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.443538   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.446253   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446573   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.446840   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.446885   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.447103   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.447250   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.447399   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.449854   47608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0229 19:02:58.450308   47608 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:02:58.450842   47608 main.go:141] libmachine: Using API Version  1
	I0229 19:02:58.450862   47608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:02:58.451215   47608 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:02:58.452123   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetState
	I0229 19:02:58.453574   47608 main.go:141] libmachine: (embed-certs-991128) Calling .DriverName
	I0229 19:02:58.453805   47608 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.453822   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:02:58.453836   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHHostname
	I0229 19:02:58.456718   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457141   47608 main.go:141] libmachine: (embed-certs-991128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:76:e2", ip: ""} in network mk-embed-certs-991128: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:29 +0000 UTC Type:0 Mac:52:54:00:44:76:e2 Iaid: IPaddr:192.168.61.34 Prefix:24 Hostname:embed-certs-991128 Clientid:01:52:54:00:44:76:e2}
	I0229 19:02:58.457198   47608 main.go:141] libmachine: (embed-certs-991128) DBG | domain embed-certs-991128 has defined IP address 192.168.61.34 and MAC address 52:54:00:44:76:e2 in network mk-embed-certs-991128
	I0229 19:02:58.457301   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHPort
	I0229 19:02:58.457891   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHKeyPath
	I0229 19:02:58.458055   47608 main.go:141] libmachine: (embed-certs-991128) Calling .GetSSHUsername
	I0229 19:02:58.458208   47608 sshutil.go:53] new ssh client: &{IP:192.168.61.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/embed-certs-991128/id_rsa Username:docker}
	I0229 19:02:58.622646   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:02:58.666581   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:02:58.680294   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:02:58.680319   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:02:58.701182   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:02:58.826426   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:02:58.826454   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:02:58.896074   47608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-991128" context rescaled to 1 replicas
	I0229 19:02:58.896112   47608 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.34 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:02:58.897987   47608 out.go:177] * Verifying Kubernetes components...
	I0229 19:02:58.899307   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:02:58.943695   47608 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:02:58.943719   47608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:02:59.111473   47608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:00.514730   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.892048484s)
	I0229 19:03:00.514786   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.514797   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515119   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515140   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.515155   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.515151   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.515163   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.515407   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.515422   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.525724   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:00.525747   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:00.526016   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:00.526034   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:00.526058   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:00.549463   47608 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882844212s)
	I0229 19:03:00.549496   47608 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
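The sed pipeline above splices a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.61.1 in this run). A minimal spot-check of the result — run outside the test harness, and assuming the kubeconfig context carries the profile name — would be:

    kubectl --context embed-certs-991128 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # Expected fragment, reconstructed from the sed expression above:
    #     hosts {
    #        192.168.61.1 host.minikube.internal
    #        fallthrough
    #     }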
	I0229 19:03:01.032296   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.331073482s)
	I0229 19:03:01.032299   47608 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.132962021s)
	I0229 19:03:01.032378   47608 node_ready.go:35] waiting up to 6m0s for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.032351   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032449   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.032776   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.032863   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.032884   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.032912   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.032929   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.033250   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.033294   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.033313   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.054533   47608 node_ready.go:49] node "embed-certs-991128" has status "Ready":"True"
	I0229 19:03:01.054561   47608 node_ready.go:38] duration metric: took 22.162376ms waiting for node "embed-certs-991128" to be "Ready" ...
	I0229 19:03:01.054574   47608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
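The "extra waiting" step above polls each system-critical pod by label. Expressed as a one-off command (an illustrative equivalent, not what minikube itself runs), the same readiness check for the kube-dns pods would look like:

    kubectl --context embed-certs-991128 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m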
	I0229 19:03:01.073737   47608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962221621s)
	I0229 19:03:01.073792   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.073807   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074112   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074134   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074144   47608 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:01.074152   47608 main.go:141] libmachine: (embed-certs-991128) Calling .Close
	I0229 19:03:01.074378   47608 main.go:141] libmachine: (embed-certs-991128) DBG | Closing plugin on server side
	I0229 19:03:01.074414   47608 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:01.074423   47608 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:01.074438   47608 addons.go:470] Verifying addon metrics-server=true in "embed-certs-991128"
	I0229 19:03:01.076668   47608 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0229 19:03:00.186003   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:02.684214   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:01.077896   47608 addons.go:505] enable addons completed in 2.686848059s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0229 19:03:01.090039   47608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101161   47608 pod_ready.go:92] pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.101188   47608 pod_ready.go:81] duration metric: took 11.117889ms waiting for pod "coredns-5dd5756b68-nth8z" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.101200   47608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106035   47608 pod_ready.go:92] pod "etcd-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.106059   47608 pod_ready.go:81] duration metric: took 4.853039ms waiting for pod "etcd-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.106069   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112716   47608 pod_ready.go:92] pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.112741   47608 pod_ready.go:81] duration metric: took 6.663364ms waiting for pod "kube-apiserver-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.112753   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117682   47608 pod_ready.go:92] pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.117712   47608 pod_ready.go:81] duration metric: took 4.950508ms waiting for pod "kube-controller-manager-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.117723   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449759   47608 pod_ready.go:92] pod "kube-proxy-5grst" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.449780   47608 pod_ready.go:81] duration metric: took 332.0508ms waiting for pod "kube-proxy-5grst" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.449789   47608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837609   47608 pod_ready.go:92] pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:01.837633   47608 pod_ready.go:81] duration metric: took 387.837788ms waiting for pod "kube-scheduler-embed-certs-991128" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:01.837641   47608 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:03.844755   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.183456   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.184892   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:05.844890   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:07.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:09.185625   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:11.683928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:10.345767   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:12.346373   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:14.844773   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:13.684321   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.184064   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:16.845609   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:19.346873   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:18.185564   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.685386   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:20.199795   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.19281949s)
	I0229 19:03:20.199858   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:20.217490   48088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:20.230760   48088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:20.243524   48088 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:20.243561   48088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:20.456117   48088 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:03:21.845081   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.845701   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:23.184306   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.185094   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:25.677354   47515 pod_ready.go:81] duration metric: took 4m0.000327645s waiting for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" ...
	E0229 19:03:25.677385   47515 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-ggf8f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 19:03:25.677415   47515 pod_ready.go:38] duration metric: took 4m14.05174509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:25.677440   47515 kubeadm.go:640] restartCluster took 4m31.88709285s
	W0229 19:03:25.677495   47515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 19:03:25.677520   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:03:29.090699   48088 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:03:29.090795   48088 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:29.090912   48088 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:29.091058   48088 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:29.091185   48088 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:29.091273   48088 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:29.092712   48088 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:29.092825   48088 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:29.092914   48088 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:29.093021   48088 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:29.093110   48088 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:29.093199   48088 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:29.093273   48088 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:29.093353   48088 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:29.093430   48088 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:29.093523   48088 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:29.093617   48088 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:29.093668   48088 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:29.093741   48088 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:29.093811   48088 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:29.093880   48088 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:29.093962   48088 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:29.094031   48088 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:29.094133   48088 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:29.094211   48088 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:29.095825   48088 out.go:204]   - Booting up control plane ...
	I0229 19:03:29.095939   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:29.096048   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:29.096154   48088 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:29.096322   48088 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:29.096423   48088 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:29.096489   48088 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:29.096694   48088 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:03:29.096769   48088 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003591 seconds
	I0229 19:03:29.096853   48088 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:03:29.096951   48088 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:03:29.097006   48088 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:03:29.097158   48088 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-153528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:03:29.097202   48088 kubeadm.go:322] [bootstrap-token] Using token: 1l0lv4.q8mu3aeamo8e3253
	I0229 19:03:29.098693   48088 out.go:204]   - Configuring RBAC rules ...
	I0229 19:03:29.098829   48088 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:03:29.098945   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:03:29.099166   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:03:29.099357   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:03:29.099502   48088 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:03:29.099613   48088 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:03:29.099756   48088 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:03:29.099816   48088 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:03:29.099874   48088 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:03:29.099884   48088 kubeadm.go:322] 
	I0229 19:03:29.099961   48088 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:03:29.099970   48088 kubeadm.go:322] 
	I0229 19:03:29.100060   48088 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:03:29.100070   48088 kubeadm.go:322] 
	I0229 19:03:29.100100   48088 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:03:29.100173   48088 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:03:29.100239   48088 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:03:29.100252   48088 kubeadm.go:322] 
	I0229 19:03:29.100319   48088 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:03:29.100329   48088 kubeadm.go:322] 
	I0229 19:03:29.100388   48088 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:03:29.100398   48088 kubeadm.go:322] 
	I0229 19:03:29.100463   48088 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:03:29.100559   48088 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:03:29.100651   48088 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:03:29.100661   48088 kubeadm.go:322] 
	I0229 19:03:29.100763   48088 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:03:29.100862   48088 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:03:29.100877   48088 kubeadm.go:322] 
	I0229 19:03:29.100984   48088 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101114   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:03:29.101143   48088 kubeadm.go:322] 	--control-plane 
	I0229 19:03:29.101152   48088 kubeadm.go:322] 
	I0229 19:03:29.101249   48088 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:03:29.101258   48088 kubeadm.go:322] 
	I0229 19:03:29.101351   48088 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1l0lv4.q8mu3aeamo8e3253 \
	I0229 19:03:29.101473   48088 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
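The join commands printed by kubeadm above embed a discovery-token CA certificate hash. If that hash needed to be re-derived on the node, the standard kubeadm procedure applies — shown here as a sketch using the certificateDir reported earlier in this log (/var/lib/minikube/certs) rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'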
	I0229 19:03:29.101488   48088 cni.go:84] Creating CNI manager for ""
	I0229 19:03:29.101497   48088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:03:29.103073   48088 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:03:29.104219   48088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:03:29.170952   48088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:03:29.239084   48088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:03:29.239154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:29.239173   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=default-k8s-diff-port-153528 minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:25.847505   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:28.346494   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:29.423784   48088 ops.go:34] apiserver oom_adj: -16
	I0229 19:03:29.641150   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.141394   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.641982   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.141220   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:31.642229   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.141232   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:32.641372   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.141757   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:33.641285   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:34.141462   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:30.346615   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:32.844207   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.846669   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:34.641857   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.142068   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:35.641289   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.142146   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.641965   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.141335   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:37.641778   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.141415   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:38.641267   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:39.141162   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:36.846708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.347339   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:39.642154   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.141271   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:40.641433   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.141522   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.641353   48088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:03:41.787617   48088 kubeadm.go:1088] duration metric: took 12.548525295s to wait for elevateKubeSystemPrivileges.
	I0229 19:03:41.787657   48088 kubeadm.go:406] StartCluster complete in 5m24.60631624s
	I0229 19:03:41.787678   48088 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.787771   48088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:03:41.789341   48088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:41.789617   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:03:41.789716   48088 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:03:41.789815   48088 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:03:41.789835   48088 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789835   48088 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789856   48088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-153528"
	I0229 19:03:41.789821   48088 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-153528"
	I0229 19:03:41.789879   48088 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789890   48088 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:03:41.789937   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.789861   48088 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.789963   48088 addons.go:243] addon metrics-server should already be in state true
	I0229 19:03:41.790008   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.790304   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790312   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790332   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790338   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.790374   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.790417   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.806924   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0229 19:03:41.807115   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0229 19:03:41.807481   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.807671   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808017   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808036   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808178   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.808194   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.808251   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0229 19:03:41.808377   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.808613   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.808953   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.808999   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.809113   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.809136   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.809418   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809604   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.809789   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.810683   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.810718   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.813030   48088 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-153528"
	W0229 19:03:41.813045   48088 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:03:41.813066   48088 host.go:66] Checking if "default-k8s-diff-port-153528" exists ...
	I0229 19:03:41.813309   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.813321   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.824373   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0229 19:03:41.824768   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.825263   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.825280   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.825557   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.825699   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.827334   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.828844   48088 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:03:41.829931   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:03:41.829943   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:03:41.829968   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.833079   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833090   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0229 19:03:41.833451   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.833516   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.833527   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.833694   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.833895   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.833913   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0229 19:03:41.833917   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.833982   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.834140   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.834272   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.834795   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.835272   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.835293   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835298   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.835675   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.835791   48088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:03:41.835798   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.835827   48088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:03:41.837394   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.839349   48088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:41.840971   48088 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:41.840992   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:03:41.841008   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.844091   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844475   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.844505   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.844735   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.844954   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.845143   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.845300   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:41.853524   48088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0229 19:03:41.855329   48088 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:03:41.855788   48088 main.go:141] libmachine: Using API Version  1
	I0229 19:03:41.855809   48088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:03:41.856135   48088 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:03:41.856317   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetState
	I0229 19:03:41.857882   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .DriverName
	I0229 19:03:41.858179   48088 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:41.858193   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:03:41.858214   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHHostname
	I0229 19:03:41.861292   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861640   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ec:2b", ip: ""} in network mk-default-k8s-diff-port-153528: {Iface:virbr3 ExpiryTime:2024-02-29 19:58:02 +0000 UTC Type:0 Mac:52:54:00:78:ec:2b Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:default-k8s-diff-port-153528 Clientid:01:52:54:00:78:ec:2b}
	I0229 19:03:41.861664   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | domain default-k8s-diff-port-153528 has defined IP address 192.168.39.210 and MAC address 52:54:00:78:ec:2b in network mk-default-k8s-diff-port-153528
	I0229 19:03:41.861899   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHPort
	I0229 19:03:41.862088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHKeyPath
	I0229 19:03:41.862241   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .GetSSHUsername
	I0229 19:03:41.862413   48088 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/default-k8s-diff-port-153528/id_rsa Username:docker}
	I0229 19:03:42.162741   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:03:42.162760   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:03:42.164559   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:03:42.185784   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:03:42.225413   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:03:42.283759   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:03:42.283792   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:03:42.296879   48088 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-153528" context rescaled to 1 replicas
	I0229 19:03:42.296912   48088 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.210 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:03:42.298687   48088 out.go:177] * Verifying Kubernetes components...
	I0229 19:03:42.300011   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:42.478347   48088 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:42.478370   48088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:03:42.626185   48088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:03:44.654846   48088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469026575s)
	I0229 19:03:44.654876   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.429431888s)
	I0229 19:03:44.654891   48088 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 19:03:44.654927   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.654937   48088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.354896537s)
	I0229 19:03:44.654987   48088 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.654942   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655090   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.490505268s)
	I0229 19:03:44.655115   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655125   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655326   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655344   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655346   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655345   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655354   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655357   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655363   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655370   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.655379   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.655562   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655604   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655579   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.655662   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.655821   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.655659   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659331   48088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033110492s)
	I0229 19:03:44.659381   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659393   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659652   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659667   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659675   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.659683   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.659685   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659902   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.659939   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.659950   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.659960   48088 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-153528"
	I0229 19:03:44.683397   48088 node_ready.go:49] node "default-k8s-diff-port-153528" has status "Ready":"True"
	I0229 19:03:44.683417   48088 node_ready.go:38] duration metric: took 28.415374ms waiting for node "default-k8s-diff-port-153528" to be "Ready" ...
	I0229 19:03:44.683427   48088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:03:44.685811   48088 main.go:141] libmachine: Making call to close driver server
	I0229 19:03:44.685831   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) Calling .Close
	I0229 19:03:44.686088   48088 main.go:141] libmachine: (default-k8s-diff-port-153528) DBG | Closing plugin on server side
	I0229 19:03:44.686110   48088 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:03:44.686122   48088 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:03:44.687970   48088 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0229 19:03:41.849469   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.345593   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:44.689232   48088 addons.go:505] enable addons completed in 2.899518009s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0229 19:03:44.693381   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720914   48088 pod_ready.go:92] pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.720942   48088 pod_ready.go:81] duration metric: took 27.53714ms waiting for pod "coredns-5dd5756b68-cgvkv" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.720954   48088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729596   48088 pod_ready.go:92] pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.729618   48088 pod_ready.go:81] duration metric: took 8.655818ms waiting for pod "coredns-5dd5756b68-fmptg" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.729628   48088 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734112   48088 pod_ready.go:92] pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.734130   48088 pod_ready.go:81] duration metric: took 4.494255ms waiting for pod "etcd-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.734137   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738843   48088 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:44.738860   48088 pod_ready.go:81] duration metric: took 4.717537ms waiting for pod "kube-apiserver-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:44.738868   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059153   48088 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.059174   48088 pod_ready.go:81] duration metric: took 320.300485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.059183   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465590   48088 pod_ready.go:92] pod "kube-proxy-bvrxx" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.465616   48088 pod_ready.go:81] duration metric: took 406.426237ms waiting for pod "kube-proxy-bvrxx" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.465630   48088 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858390   48088 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace has status "Ready":"True"
	I0229 19:03:45.858413   48088 pod_ready.go:81] duration metric: took 392.775547ms waiting for pod "kube-scheduler-default-k8s-diff-port-153528" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:45.858426   48088 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	I0229 19:03:47.866057   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:46.848336   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.344899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:49.866128   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.871764   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:51.346608   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:53.846506   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.394324   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.716776929s)
	I0229 19:03:58.394415   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:03:58.411946   47515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:58.422778   47515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:58.432981   47515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:03:58.433029   47515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:03:58.497643   47515 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:03:58.497784   47515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:03:58.673058   47515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:03:58.673181   47515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:03:58.673291   47515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:03:58.915681   47515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:03:54.366316   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:56.866740   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.867746   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.917365   47515 out.go:204]   - Generating certificates and keys ...
	I0229 19:03:58.917468   47515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:03:58.917556   47515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:03:58.917657   47515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:03:58.917758   47515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:03:58.917857   47515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:03:58.917933   47515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:03:58.918117   47515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:03:58.918699   47515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:03:58.919679   47515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:03:58.920578   47515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:03:58.921424   47515 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:03:58.921738   47515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:03:59.066887   47515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:03:59.215266   47515 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:03:59.404270   47515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:03:59.514467   47515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:03:59.615483   47515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:03:59.616256   47515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:03:59.619177   47515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:03:55.850264   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:58.346720   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:03:59.620798   47515 out.go:204]   - Booting up control plane ...
	I0229 19:03:59.620910   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:03:59.621009   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:03:59.621758   47515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:03:59.648331   47515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:03:59.649070   47515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:03:59.649141   47515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:03:59.796018   47515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:00.868393   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.366167   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:00.848016   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:03.347491   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:05.801078   47515 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003292 seconds
	I0229 19:04:05.820231   47515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:04:05.842846   47515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:04:06.388308   47515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:04:06.388598   47515 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-247197 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:04:06.905903   47515 kubeadm.go:322] [bootstrap-token] Using token: 42vs85.s8nvx0pxc27k9bgo
	I0229 19:04:06.907650   47515 out.go:204]   - Configuring RBAC rules ...
	I0229 19:04:06.907813   47515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:04:06.913716   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:04:06.925730   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:04:06.929319   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:04:06.933110   47515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:04:06.938550   47515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:04:06.956559   47515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:04:07.216913   47515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:04:07.320534   47515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:04:07.321455   47515 kubeadm.go:322] 
	I0229 19:04:07.321548   47515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:04:07.321578   47515 kubeadm.go:322] 
	I0229 19:04:07.321696   47515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:04:07.321710   47515 kubeadm.go:322] 
	I0229 19:04:07.321752   47515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:04:07.321848   47515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:04:07.321914   47515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:04:07.321929   47515 kubeadm.go:322] 
	I0229 19:04:07.322021   47515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:04:07.322032   47515 kubeadm.go:322] 
	I0229 19:04:07.322099   47515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:04:07.322111   47515 kubeadm.go:322] 
	I0229 19:04:07.322182   47515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:04:07.322304   47515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:04:07.322404   47515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:04:07.322416   47515 kubeadm.go:322] 
	I0229 19:04:07.322559   47515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:04:07.322679   47515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:04:07.322704   47515 kubeadm.go:322] 
	I0229 19:04:07.322808   47515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.322922   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:04:07.322956   47515 kubeadm.go:322] 	--control-plane 
	I0229 19:04:07.322964   47515 kubeadm.go:322] 
	I0229 19:04:07.323090   47515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:04:07.323103   47515 kubeadm.go:322] 
	I0229 19:04:07.323230   47515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 42vs85.s8nvx0pxc27k9bgo \
	I0229 19:04:07.323408   47515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:04:07.323921   47515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
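For reference, the --discovery-token-ca-cert-hash value printed above is the SHA-256 of the cluster CA public key. As a sketch (assuming the certificateDir /var/lib/minikube/certs shown earlier in this init run and an RSA CA key, per the standard kubeadm procedure), it could be recomputed on the node with:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'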
	I0229 19:04:07.323961   47515 cni.go:84] Creating CNI manager for ""
	I0229 19:04:07.323975   47515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:04:07.325925   47515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:04:07.327319   47515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:04:07.387016   47515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
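The 457-byte file written above is the bridge CNI configuration minikube installs for the crio runtime. As an illustrative sketch only (the field values here are assumptions, not the exact contents of 1-k8s.conflist), a bridge conflist of this general shape looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }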
	I0229 19:04:07.434438   47515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:04:07.434538   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.434554   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=no-preload-247197 minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:07.752182   47515 ops.go:34] apiserver oom_adj: -16
	I0229 19:04:07.752320   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.955017   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:04:08.955134   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:04:08.956493   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:08.956586   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:08.956684   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:08.956809   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:08.956955   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:08.957116   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:08.957253   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:08.957304   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:08.957375   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:08.959231   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:08.959317   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:08.959429   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:08.959550   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:08.959637   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:08.959745   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:08.959792   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:08.959851   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:08.959934   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:08.960022   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:08.960099   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:08.960159   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:08.960227   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:08.960303   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:08.960349   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:08.960403   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:08.960462   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:08.960540   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:05.369713   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.871542   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:08.962078   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:08.962181   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:08.962279   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:08.962361   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:08.962470   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:08.962646   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:08.962689   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:08.962777   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.962968   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963056   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963331   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963436   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963646   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.963761   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.963949   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964053   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:08.964273   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:08.964281   47919 kubeadm.go:322] 
	I0229 19:04:08.964313   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:04:08.964351   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:04:08.964358   47919 kubeadm.go:322] 
	I0229 19:04:08.964385   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:04:08.964441   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:04:08.964547   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:04:08.964560   47919 kubeadm.go:322] 
	I0229 19:04:08.964684   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:04:08.964734   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:04:08.964780   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:04:08.964789   47919 kubeadm.go:322] 
	I0229 19:04:08.964922   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:04:08.965053   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:04:08.965180   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:04:08.965255   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:04:08.965342   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:04:08.965438   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 19:04:08.965475   47919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
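The troubleshooting hints above are phrased for docker, but this suite runs the crio runtime; as a sketch using standard tools, the same inspection on the node would be:

    sudo crictl ps -a | grep kube | grep -v pause
    sudo crictl logs <CONTAINERID>
    sudo journalctl -xeu kubelet --no-pager | tail -n 50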
	
	I0229 19:04:08.965520   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 19:04:09.441915   47919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:09.459807   47919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:04:09.471061   47919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:04:09.471099   47919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:04:09.532830   47919 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:04:09.532979   47919 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:04:09.673720   47919 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:04:09.673884   47919 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:04:09.674071   47919 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:04:09.905201   47919 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:04:09.906612   47919 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:04:09.915393   47919 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:04:10.035443   47919 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:04:05.845532   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:07.846899   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:09.847708   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.037103   47919 out.go:204]   - Generating certificates and keys ...
	I0229 19:04:10.037203   47919 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:04:10.037335   47919 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:04:10.037453   47919 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:04:10.037558   47919 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:04:10.037689   47919 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:04:10.037832   47919 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:04:10.038465   47919 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:04:10.038932   47919 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:04:10.039471   47919 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:04:10.039874   47919 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:04:10.039961   47919 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:04:10.040045   47919 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:04:10.157741   47919 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:04:10.426271   47919 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:04:10.528768   47919 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:04:10.595099   47919 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:04:10.596020   47919 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:04:08.252779   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:08.753332   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.252867   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:09.752631   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.253281   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.753138   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.253104   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:11.752894   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.253271   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:12.753046   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:10.367912   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:12.870689   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:10.597781   47919 out.go:204]   - Booting up control plane ...
	I0229 19:04:10.597872   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:04:10.602307   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:04:10.603371   47919 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:04:10.604660   47919 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:04:10.607876   47919 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:04:12.346304   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:14.346555   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:13.252668   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:13.752660   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.252803   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:14.752360   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.252343   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.752568   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.252484   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:16.752977   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.253148   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:17.753112   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:15.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:17.867839   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.253109   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:18.753221   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.253179   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.752851   47515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:04:19.875013   47515 kubeadm.go:1088] duration metric: took 12.44055176s to wait for elevateKubeSystemPrivileges.
	I0229 19:04:19.875056   47515 kubeadm.go:406] StartCluster complete in 5m26.137187745s
	I0229 19:04:19.875078   47515 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.875156   47515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:04:19.876716   47515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:04:19.876957   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:04:19.877116   47515 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:04:19.877196   47515 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247197"
	I0229 19:04:19.877207   47515 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:04:19.877222   47515 addons.go:69] Setting metrics-server=true in profile "no-preload-247197"
	I0229 19:04:19.877208   47515 addons.go:69] Setting default-storageclass=true in profile "no-preload-247197"
	I0229 19:04:19.877269   47515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247197"
	I0229 19:04:19.877213   47515 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247197"
	W0229 19:04:19.877372   47515 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:04:19.877412   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877244   47515 addons.go:234] Setting addon metrics-server=true in "no-preload-247197"
	W0229 19:04:19.877465   47515 addons.go:243] addon metrics-server should already be in state true
	I0229 19:04:19.877519   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.877697   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877734   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877787   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877822   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.877875   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.877905   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.895578   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0229 19:04:19.896005   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.896491   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.896516   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.897033   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.897628   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.897677   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.897705   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0229 19:04:19.897711   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0229 19:04:19.898072   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898171   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.898512   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898533   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898653   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.898674   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.898854   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899002   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.899159   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.899386   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.899433   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.902917   47515 addons.go:234] Setting addon default-storageclass=true in "no-preload-247197"
	W0229 19:04:19.902937   47515 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:04:19.902965   47515 host.go:66] Checking if "no-preload-247197" exists ...
	I0229 19:04:19.903374   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.903492   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.915592   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0229 19:04:19.916152   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.916347   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0229 19:04:19.916677   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.916694   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.916799   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.917168   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.917302   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.917314   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.917505   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918075   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.918253   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.918351   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0229 19:04:19.918773   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.919153   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.919170   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.919631   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.919999   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.922165   47515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 19:04:19.920215   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.920473   47515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:04:19.923441   47515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:04:19.923454   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 19:04:19.923466   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 19:04:19.923481   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.924990   47515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:04:16.845870   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:18.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:19.926366   47515 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:19.926372   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926384   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:04:19.926402   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.926728   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.926752   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.926908   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.927072   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.927216   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.927357   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.929366   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929709   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.929728   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.929855   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.930000   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.930090   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.930171   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:19.940292   47515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0229 19:04:19.940856   47515 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:04:19.941327   47515 main.go:141] libmachine: Using API Version  1
	I0229 19:04:19.941354   47515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:04:19.941647   47515 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:04:19.941817   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetState
	I0229 19:04:19.943378   47515 main.go:141] libmachine: (no-preload-247197) Calling .DriverName
	I0229 19:04:19.943608   47515 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:19.943624   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:04:19.943640   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHHostname
	I0229 19:04:19.946715   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947112   47515 main.go:141] libmachine: (no-preload-247197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2f:53", ip: ""} in network mk-no-preload-247197: {Iface:virbr2 ExpiryTime:2024-02-29 19:58:22 +0000 UTC Type:0 Mac:52:54:00:2c:2f:53 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:no-preload-247197 Clientid:01:52:54:00:2c:2f:53}
	I0229 19:04:19.947132   47515 main.go:141] libmachine: (no-preload-247197) DBG | domain no-preload-247197 has defined IP address 192.168.50.72 and MAC address 52:54:00:2c:2f:53 in network mk-no-preload-247197
	I0229 19:04:19.947413   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHPort
	I0229 19:04:19.947546   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHKeyPath
	I0229 19:04:19.947672   47515 main.go:141] libmachine: (no-preload-247197) Calling .GetSSHUsername
	I0229 19:04:19.947795   47515 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/no-preload-247197/id_rsa Username:docker}
	I0229 19:04:20.159078   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:04:20.246059   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 19:04:20.246085   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 19:04:20.338238   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 19:04:20.338261   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 19:04:20.365954   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
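The sed pipeline above edits the CoreDNS Corefile in place: it inserts a "log" directive before "errors" and a "hosts" block for host.minikube.internal before the "forward" line, so the resulting server block contains a fragment like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }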
	I0229 19:04:20.383186   47515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247197" context rescaled to 1 replicas
	I0229 19:04:20.383231   47515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:04:20.385225   47515 out.go:177] * Verifying Kubernetes components...
	I0229 19:04:20.386616   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:04:20.395136   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:04:20.442555   47515 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:20.442575   47515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 19:04:20.584731   47515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 19:04:21.931286   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.772173305s)
	I0229 19:04:21.931338   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931350   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.931346   47515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.565356284s)
	I0229 19:04:21.931374   47515 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 19:04:21.931413   47515 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.544778173s)
	I0229 19:04:21.931439   47515 node_ready.go:35] waiting up to 6m0s for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.931456   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.536286802s)
	I0229 19:04:21.931484   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.931493   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932214   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932216   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932230   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932243   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932252   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932269   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932251   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932321   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932330   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.932340   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.932458   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932470   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.932629   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.932649   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.932656   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.949312   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:21.949338   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:21.949619   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:21.949662   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:21.949675   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:21.951119   47515 node_ready.go:49] node "no-preload-247197" has status "Ready":"True"
	I0229 19:04:21.951138   47515 node_ready.go:38] duration metric: took 19.687343ms waiting for node "no-preload-247197" to be "Ready" ...
	I0229 19:04:21.951148   47515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:04:21.965909   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979164   47515 pod_ready.go:92] pod "coredns-76f75df574-4k6hl" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.979185   47515 pod_ready.go:81] duration metric: took 13.25328ms waiting for pod "coredns-76f75df574-4k6hl" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.979197   47515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987905   47515 pod_ready.go:92] pod "coredns-76f75df574-9z6k5" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.987924   47515 pod_ready.go:81] duration metric: took 8.719445ms waiting for pod "coredns-76f75df574-9z6k5" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.987935   47515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992310   47515 pod_ready.go:92] pod "etcd-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.992328   47515 pod_ready.go:81] duration metric: took 4.385196ms waiting for pod "etcd-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.992339   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999702   47515 pod_ready.go:92] pod "kube-apiserver-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:21.999722   47515 pod_ready.go:81] duration metric: took 7.374368ms waiting for pod "kube-apiserver-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:21.999733   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.010201   47515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425431238s)
	I0229 19:04:22.010236   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010249   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010564   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010605   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010614   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010635   47515 main.go:141] libmachine: Making call to close driver server
	I0229 19:04:22.010644   47515 main.go:141] libmachine: (no-preload-247197) Calling .Close
	I0229 19:04:22.010882   47515 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:04:22.010900   47515 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:04:22.010910   47515 main.go:141] libmachine: (no-preload-247197) DBG | Closing plugin on server side
	I0229 19:04:22.010910   47515 addons.go:470] Verifying addon metrics-server=true in "no-preload-247197"
	I0229 19:04:22.013314   47515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 19:04:22.014366   47515 addons.go:505] enable addons completed in 2.137254118s: enabled=[storage-provisioner default-storageclass metrics-server]
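	[editor's sketch] The CoreDNS step a few lines above fetches the coredns ConfigMap, splices a hosts stanza into the Corefile ahead of the forward plugin with sed, and re-applies it with kubectl replace; that is what the "host record injected into CoreDNS's ConfigMap" line confirms. As a rough sketch (the 192.168.50.1 address is specific to this run), the injected fragment amounts to:
	
		hosts {
		   192.168.50.1 host.minikube.internal
		   fallthrough
		}
	
	It can be inspected after the fact with the same read command the tooling uses, e.g. kubectl -n kube-system get configmap coredns -o yaml.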
	I0229 19:04:22.338772   47515 pod_ready.go:92] pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.338799   47515 pod_ready.go:81] duration metric: took 339.058404ms waiting for pod "kube-controller-manager-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.338812   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737254   47515 pod_ready.go:92] pod "kube-proxy-vvkjv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:22.737280   47515 pod_ready.go:81] duration metric: took 398.461074ms waiting for pod "kube-proxy-vvkjv" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:22.737294   47515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:20.370710   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:22.866800   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:20.846680   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.345140   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:23.135406   47515 pod_ready.go:92] pod "kube-scheduler-no-preload-247197" in "kube-system" namespace has status "Ready":"True"
	I0229 19:04:23.135428   47515 pod_ready.go:81] duration metric: took 398.125345ms waiting for pod "kube-scheduler-no-preload-247197" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:23.135440   47515 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	I0229 19:04:25.142619   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.143696   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.367175   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.380854   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:25.346266   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:27.844825   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.846222   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.642557   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.143195   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:29.866361   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.365864   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:32.344240   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.345406   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.642612   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.642921   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:34.366701   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.865897   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:38.866354   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:36.845225   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.344488   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:39.142773   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.643462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:40.866485   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.367569   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:41.345439   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:43.346065   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:44.142927   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:46.642548   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.369460   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.867209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:45.845033   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:47.845603   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:48.643538   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:51.143346   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.365414   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.366281   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:50.609556   47919 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:04:50.610341   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:50.610592   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:50.347163   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:52.846321   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.847146   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:53.643605   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.644824   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:54.866162   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:57.366119   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:55.610941   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:04:55.611235   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:04:57.345852   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.846768   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:58.141799   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:00.142827   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.642593   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:04:59.867791   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.366238   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:02.345863   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.844340   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.643708   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:07.142551   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:04.367016   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:06.866170   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.869317   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:05.611726   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:05.611996   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:06.846686   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:08.846956   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:09.143595   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.143779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.367337   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.865929   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:11.345732   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.346279   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:13.644332   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:16.143576   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.866653   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.366706   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:15.844887   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:17.846717   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:18.642599   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.642837   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.643895   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.368483   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.866758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:20.346170   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:22.845477   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.142628   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.643975   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.366726   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.866780   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:25.612622   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:05:25.612856   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:05:25.346171   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:27.346624   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:29.844724   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.142942   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.143445   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:30.367152   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:32.865657   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:31.845835   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.347482   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.642780   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.642919   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:34.870444   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:37.367617   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:36.844507   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.845472   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:38.643505   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.142928   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:39.865207   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.867210   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:41.344604   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.347346   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:43.143348   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.143659   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.643054   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:44.366192   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:46.368043   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:48.867455   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:45.844395   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:47.845753   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.143481   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.643947   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:51.365758   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:53.866493   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:50.344819   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:52.346315   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:54.845777   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.145751   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.644326   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:55.866532   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:57.866831   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:56.845928   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.345840   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:00.142068   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.142779   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:05:59.870256   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:02.365280   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:01.845248   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.347842   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:05.613204   47919 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:06:05.613467   47919 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:06:05.613495   47919 kubeadm.go:322] 
	I0229 19:06:05.613547   47919 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:06:05.613598   47919 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:06:05.613608   47919 kubeadm.go:322] 
	I0229 19:06:05.613653   47919 kubeadm.go:322] This error is likely caused by:
	I0229 19:06:05.613694   47919 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:06:05.613814   47919 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:06:05.613823   47919 kubeadm.go:322] 
	I0229 19:06:05.613911   47919 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:06:05.613941   47919 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:06:05.613974   47919 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:06:05.613980   47919 kubeadm.go:322] 
	I0229 19:06:05.614107   47919 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:06:05.614240   47919 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:06:05.614361   47919 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:06:05.614432   47919 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:06:05.614533   47919 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:06:05.614577   47919 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:06:05.615575   47919 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:06:05.615689   47919 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:06:05.615765   47919 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:06:05.615822   47919 kubeadm.go:406] StartCluster complete in 8m8.067253054s
	I0229 19:06:05.615873   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:06:05.615920   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:06:05.671959   47919 cri.go:89] found id: ""
	I0229 19:06:05.671998   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.672018   47919 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:05.672025   47919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:06:05.672075   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:06:05.715832   47919 cri.go:89] found id: ""
	I0229 19:06:05.715853   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.715860   47919 logs.go:278] No container was found matching "etcd"
	I0229 19:06:05.715866   47919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:06:05.715911   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:06:05.755305   47919 cri.go:89] found id: ""
	I0229 19:06:05.755334   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.755345   47919 logs.go:278] No container was found matching "coredns"
	I0229 19:06:05.755351   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:06:05.755409   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:06:05.807907   47919 cri.go:89] found id: ""
	I0229 19:06:05.807938   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.807950   47919 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:05.807957   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:06:05.808015   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:06:05.892777   47919 cri.go:89] found id: ""
	I0229 19:06:05.892805   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.892813   47919 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:05.892818   47919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:06:05.892877   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:06:05.931488   47919 cri.go:89] found id: ""
	I0229 19:06:05.931516   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.931527   47919 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:05.931534   47919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:06:05.931578   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:06:05.971989   47919 cri.go:89] found id: ""
	I0229 19:06:05.972018   47919 logs.go:276] 0 containers: []
	W0229 19:06:05.972030   47919 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:05.972037   47919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 19:06:05.972112   47919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 19:06:06.008174   47919 cri.go:89] found id: ""
	I0229 19:06:06.008198   47919 logs.go:276] 0 containers: []
	W0229 19:06:06.008208   47919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:06.008224   47919 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:06.008241   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:06.024924   47919 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:06.024953   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:06.111879   47919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:06.111904   47919 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:06:06.111918   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:06:06.221563   47919 logs.go:123] Gathering logs for container status ...
	I0229 19:06:06.221593   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:06.266861   47919 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:06.266897   47919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:06.314923   47919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:06:06.314971   47919 out.go:239] * 
	W0229 19:06:06.315043   47919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.315065   47919 out.go:239] * 
	W0229 19:06:06.315824   47919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:06:06.318988   47919 out.go:177] 
	W0229 19:06:06.320200   47919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:06:06.320245   47919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:06:06.320270   47919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:06:06.321598   47919 out.go:177] 
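	[editor's sketch] The failure above is kubeadm's wait-control-plane phase timing out because the kubelet never answered on http://localhost:10248/healthz, so no control-plane containers were ever created (every crictl query returns an empty list). The remediation the log itself proposes can be tried directly; a minimal sketch, using only the commands printed in the output (the first two on the affected node, then a retry of the start with the suggested extra config):
	
		systemctl status kubelet
		journalctl -xeu kubelet
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	Whether the cgroup-driver hint actually applies to this run is an assumption; it is simply the suggestion minikube printed alongside the related issue link (kubernetes/minikube#4172).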
	I0229 19:06:04.143707   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.145980   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:04.366140   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.366873   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.366955   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:06.852698   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:09.348579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:08.643671   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.143678   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:10.865166   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:12.866971   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:11.845538   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:14.346445   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:13.642537   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.643262   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.647209   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:15.366149   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:17.367209   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:16.845485   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:18.845671   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.647627   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:22.145622   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:19.866267   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:21.866857   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:20.845841   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:23.349149   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.646242   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:27.143078   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:24.368344   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:26.867329   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:25.846273   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:28.346226   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.642886   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.646657   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:29.365191   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:31.366142   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:33.865692   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:30.845019   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:32.845500   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:34.142811   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:36.144736   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.870114   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.365999   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:35.347102   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:37.347579   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:39.845962   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:38.642930   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.642989   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.645337   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:40.366651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:42.865651   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:41.846699   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.348062   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:45.145291   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.643786   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:44.866389   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:47.365775   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:46.844303   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:48.845366   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:50.143250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:52.642758   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:49.366973   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.865400   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.868123   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:51.345427   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:53.346292   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:54.643044   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.643641   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:56.366088   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.865505   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:55.845353   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.345421   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:06:58.644239   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.142462   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.374753   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:03.866228   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:00.345809   47608 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:01.845528   47608 pod_ready.go:81] duration metric: took 4m0.007876165s waiting for pod "metrics-server-57f55c9bc5-r66xw" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:01.845551   47608 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:01.845562   47608 pod_ready.go:38] duration metric: took 4m0.790976213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:07:01.845581   47608 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:01.845611   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:01.845671   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:01.901601   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:01.901625   47608 cri.go:89] found id: ""
	I0229 19:07:01.901636   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:01.901693   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.906698   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:01.906771   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:01.947360   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:01.947383   47608 cri.go:89] found id: ""
	I0229 19:07:01.947395   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:01.947453   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:01.952251   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:01.952314   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:01.996254   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:01.996279   47608 cri.go:89] found id: ""
	I0229 19:07:01.996289   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:01.996346   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.001158   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:02.001229   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:02.039559   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.039583   47608 cri.go:89] found id: ""
	I0229 19:07:02.039593   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:02.039653   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.045320   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:02.045439   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:02.091908   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.091932   47608 cri.go:89] found id: ""
	I0229 19:07:02.091941   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:02.092002   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.097461   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:02.097533   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:02.142993   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.143017   47608 cri.go:89] found id: ""
	I0229 19:07:02.143043   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:02.143114   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.148395   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:02.148469   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:02.189479   47608 cri.go:89] found id: ""
	I0229 19:07:02.189500   47608 logs.go:276] 0 containers: []
	W0229 19:07:02.189508   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:02.189513   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:02.189567   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:02.237218   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.237238   47608 cri.go:89] found id: ""
	I0229 19:07:02.237246   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:02.237299   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:02.242232   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:02.242256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:02.258190   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:02.258213   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:02.401759   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:02.401786   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:02.455230   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:02.455256   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:02.507842   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:02.507870   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:02.562721   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:02.562747   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:02.655664   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:02.655696   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:02.711422   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:02.711450   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:02.763124   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:02.763151   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:02.812093   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:02.812126   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:02.863781   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:02.863810   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:02.909931   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:02.909956   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:03.148571   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.642292   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:07.646950   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.866773   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:08.364842   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:05.846592   47608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:05.868139   47608 api_server.go:72] duration metric: took 4m6.97199894s to wait for apiserver process to appear ...
	I0229 19:07:05.868162   47608 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:05.868198   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:05.868254   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:05.911179   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:05.911204   47608 cri.go:89] found id: ""
	I0229 19:07:05.911213   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:05.911283   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.917051   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:05.917127   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:05.958278   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:05.958304   47608 cri.go:89] found id: ""
	I0229 19:07:05.958312   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:05.958366   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:05.963467   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:05.963538   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:06.003497   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.003516   47608 cri.go:89] found id: ""
	I0229 19:07:06.003525   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:06.003578   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.008829   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:06.008900   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:06.048632   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.048654   47608 cri.go:89] found id: ""
	I0229 19:07:06.048662   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:06.048719   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.053674   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:06.053725   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:06.095377   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.095398   47608 cri.go:89] found id: ""
	I0229 19:07:06.095406   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:06.095455   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.100277   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:06.100344   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:06.141330   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.141351   47608 cri.go:89] found id: ""
	I0229 19:07:06.141361   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:06.141418   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.146628   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:06.146675   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:06.195525   47608 cri.go:89] found id: ""
	I0229 19:07:06.195552   47608 logs.go:276] 0 containers: []
	W0229 19:07:06.195563   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:06.195570   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:06.195626   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:06.242893   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.242912   47608 cri.go:89] found id: ""
	I0229 19:07:06.242918   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:06.242963   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:06.247876   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:06.247894   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:06.264869   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:06.264905   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:06.403612   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:06.403639   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:06.468541   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:06.468569   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:06.523984   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:06.524016   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:06.599105   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:06.599133   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:06.672044   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:06.672074   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:06.772478   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:06.772509   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:06.817949   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:06.817978   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:06.866713   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:06.866743   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:06.912206   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:06.912234   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:07.320100   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:07.320136   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:09.875603   47608 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0229 19:07:09.884525   47608 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0229 19:07:09.886045   47608 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:09.886063   47608 api_server.go:131] duration metric: took 4.017895877s to wait for apiserver health ...
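The healthz probe performed here can be reproduced by hand against the same endpoint (a minimal sketch; the address is the one reported above, and -k skips verification of the cluster-internal serving certificate):

	curl -k https://192.168.61.34:8443/healthz
	# a healthy control plane answers with: ok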
	I0229 19:07:09.886071   47608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:09.886091   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:09.886137   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:09.940809   47608 cri.go:89] found id: "18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:09.940831   47608 cri.go:89] found id: ""
	I0229 19:07:09.940838   47608 logs.go:276] 1 containers: [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96]
	I0229 19:07:09.940901   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:09.945610   47608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:09.945668   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:09.995270   47608 cri.go:89] found id: "795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:09.995291   47608 cri.go:89] found id: ""
	I0229 19:07:09.995299   47608 logs.go:276] 1 containers: [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e]
	I0229 19:07:09.995353   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.000358   47608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:10.000431   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:10.052073   47608 cri.go:89] found id: "7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.052094   47608 cri.go:89] found id: ""
	I0229 19:07:10.052103   47608 logs.go:276] 1 containers: [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72]
	I0229 19:07:10.052164   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.058993   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:10.059071   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:10.110467   47608 cri.go:89] found id: "f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.110494   47608 cri.go:89] found id: ""
	I0229 19:07:10.110501   47608 logs.go:276] 1 containers: [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe]
	I0229 19:07:10.110556   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.115491   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:10.115545   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:10.159522   47608 cri.go:89] found id: "3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.159540   47608 cri.go:89] found id: ""
	I0229 19:07:10.159548   47608 logs.go:276] 1 containers: [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d]
	I0229 19:07:10.159602   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.164162   47608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:10.164223   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:10.204583   47608 cri.go:89] found id: "9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:10.204602   47608 cri.go:89] found id: ""
	I0229 19:07:10.204623   47608 logs.go:276] 1 containers: [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0]
	I0229 19:07:10.204699   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.209550   47608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:10.209602   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:10.246884   47608 cri.go:89] found id: ""
	I0229 19:07:10.246907   47608 logs.go:276] 0 containers: []
	W0229 19:07:10.246915   47608 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:10.246925   47608 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:10.246970   47608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:10.142347   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.142912   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:10.286397   47608 cri.go:89] found id: "6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:10.286420   47608 cri.go:89] found id: ""
	I0229 19:07:10.286429   47608 logs.go:276] 1 containers: [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada]
	I0229 19:07:10.286476   47608 ssh_runner.go:195] Run: which crictl
	I0229 19:07:10.292279   47608 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:10.292303   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:10.432648   47608 logs.go:123] Gathering logs for kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] ...
	I0229 19:07:10.432683   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96"
	I0229 19:07:10.485438   47608 logs.go:123] Gathering logs for etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] ...
	I0229 19:07:10.485468   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e"
	I0229 19:07:10.532671   47608 logs.go:123] Gathering logs for coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] ...
	I0229 19:07:10.532702   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72"
	I0229 19:07:10.574743   47608 logs.go:123] Gathering logs for kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] ...
	I0229 19:07:10.574768   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe"
	I0229 19:07:10.625137   47608 logs.go:123] Gathering logs for kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] ...
	I0229 19:07:10.625164   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d"
	I0229 19:07:10.669432   47608 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:10.669457   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:11.008876   47608 logs.go:123] Gathering logs for container status ...
	I0229 19:07:11.008906   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:11.060752   47608 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:11.060785   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:11.167311   47608 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:11.167344   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:11.185133   47608 logs.go:123] Gathering logs for kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] ...
	I0229 19:07:11.185160   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0"
	I0229 19:07:11.251587   47608 logs.go:123] Gathering logs for storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] ...
	I0229 19:07:11.251614   47608 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada"
	I0229 19:07:13.809877   47608 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:13.809904   47608 system_pods.go:61] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.809910   47608 system_pods.go:61] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.809915   47608 system_pods.go:61] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.809920   47608 system_pods.go:61] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.809924   47608 system_pods.go:61] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.809928   47608 system_pods.go:61] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.809937   47608 system_pods.go:61] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.809945   47608 system_pods.go:61] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.809957   47608 system_pods.go:74] duration metric: took 3.923878638s to wait for pod list to return data ...
	I0229 19:07:13.809967   47608 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:13.814425   47608 default_sa.go:45] found service account: "default"
	I0229 19:07:13.814451   47608 default_sa.go:55] duration metric: took 4.476391ms for default service account to be created ...
	I0229 19:07:13.814463   47608 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:13.822812   47608 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:13.822834   47608 system_pods.go:89] "coredns-5dd5756b68-nth8z" [eeec9c32-9f61-4cb7-b1fb-3dd75c5af668] Running
	I0229 19:07:13.822842   47608 system_pods.go:89] "etcd-embed-certs-991128" [59422cbb-1dd9-49de-8a33-5722c44673db] Running
	I0229 19:07:13.822849   47608 system_pods.go:89] "kube-apiserver-embed-certs-991128" [7575302f-597d-4ffc-9411-12fa4e1d4e8d] Running
	I0229 19:07:13.822856   47608 system_pods.go:89] "kube-controller-manager-embed-certs-991128" [e9cbc6cc-5910-4807-95dd-3ec88a184ec2] Running
	I0229 19:07:13.822864   47608 system_pods.go:89] "kube-proxy-5grst" [35524449-8c5a-440d-a45f-ce631ebff076] Running
	I0229 19:07:13.822871   47608 system_pods.go:89] "kube-scheduler-embed-certs-991128" [e95aeb48-8783-4620-89e0-7454e9cd251d] Running
	I0229 19:07:13.822883   47608 system_pods.go:89] "metrics-server-57f55c9bc5-r66xw" [8eb63357-6b36-49f3-98a5-c74bb4a9b09c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:13.822893   47608 system_pods.go:89] "storage-provisioner" [a9ce642e-81dc-4dd7-be8e-3796e19f8f03] Running
	I0229 19:07:13.822908   47608 system_pods.go:126] duration metric: took 8.437411ms to wait for k8s-apps to be running ...
	I0229 19:07:13.822919   47608 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:13.822973   47608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:13.841166   47608 system_svc.go:56] duration metric: took 18.240886ms WaitForService to wait for kubelet.
	I0229 19:07:13.841190   47608 kubeadm.go:581] duration metric: took 4m14.94505166s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:13.841213   47608 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:13.844369   47608 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:13.844393   47608 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:13.844404   47608 node_conditions.go:105] duration metric: took 3.186855ms to run NodePressure ...
	I0229 19:07:13.844416   47608 start.go:228] waiting for startup goroutines ...
	I0229 19:07:13.844425   47608 start.go:233] waiting for cluster config update ...
	I0229 19:07:13.844438   47608 start.go:242] writing updated cluster config ...
	I0229 19:07:13.844737   47608 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:13.894129   47608 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:13.896056   47608 out.go:177] * Done! kubectl is now configured to use "embed-certs-991128" cluster and "default" namespace by default
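To confirm the kubeconfig context that was just written, the usual kubectl checks apply (a minimal sketch; the cluster name is taken from the line above):

	kubectl config current-context                      # should print embed-certs-991128
	kubectl --context embed-certs-991128 get nodes      # basic connectivity check against the new cluster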
	I0229 19:07:10.367615   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:12.866425   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.145357   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:16.642943   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:14.867561   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:17.366556   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.143410   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.147970   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:19.367285   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:21.865048   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.868674   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:23.643039   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.643205   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:27.643525   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:25.869656   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:28.369270   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.142250   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.142304   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:30.865630   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:32.870509   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:34.143254   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:36.645374   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:35.367229   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:37.865920   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:38.646004   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:41.146450   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:40.368452   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:42.866110   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:43.643363   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.643443   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:47.644208   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:44.868350   48088 pod_ready.go:102] pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:45.865595   48088 pod_ready.go:81] duration metric: took 4m0.007156363s waiting for pod "metrics-server-57f55c9bc5-v95ws" in "kube-system" namespace to be "Ready" ...
	E0229 19:07:45.865618   48088 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:07:45.865628   48088 pod_ready.go:38] duration metric: took 4m1.182191329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:07:45.865647   48088 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:07:45.865681   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:45.865737   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:45.924104   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:45.924127   48088 cri.go:89] found id: ""
	I0229 19:07:45.924136   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:45.924194   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.929769   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:45.929823   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:45.973018   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:45.973039   48088 cri.go:89] found id: ""
	I0229 19:07:45.973048   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:45.973102   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:45.978222   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:45.978284   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:46.019965   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.019984   48088 cri.go:89] found id: ""
	I0229 19:07:46.019991   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:46.020046   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.024852   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:46.024909   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:46.067904   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.067921   48088 cri.go:89] found id: ""
	I0229 19:07:46.067928   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:46.067970   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.073790   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:46.073855   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:46.113273   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.113299   48088 cri.go:89] found id: ""
	I0229 19:07:46.113320   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:46.113375   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.118626   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:46.118692   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:46.169986   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.170008   48088 cri.go:89] found id: ""
	I0229 19:07:46.170017   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:46.170065   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.175639   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:46.175699   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:46.220353   48088 cri.go:89] found id: ""
	I0229 19:07:46.220383   48088 logs.go:276] 0 containers: []
	W0229 19:07:46.220394   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:46.220402   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:46.220460   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:46.267009   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.267045   48088 cri.go:89] found id: ""
	I0229 19:07:46.267055   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:46.267105   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:46.272422   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:46.272445   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:46.337524   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:46.337554   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:46.454444   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:46.454484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:46.601211   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:46.601239   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:46.661763   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:46.661794   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:46.707569   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:46.707594   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:46.774076   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:46.774107   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:46.821259   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:46.821288   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:46.837496   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:46.837519   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:46.890812   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:46.890841   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:46.934532   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:46.934559   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:47.395235   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:47.395269   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.144146   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:52.144673   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:49.959190   48088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:49.978381   48088 api_server.go:72] duration metric: took 4m7.681437754s to wait for apiserver process to appear ...
	I0229 19:07:49.978407   48088 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:07:49.978464   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:49.978513   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:50.028150   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.028176   48088 cri.go:89] found id: ""
	I0229 19:07:50.028186   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:50.028242   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.033649   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:50.033719   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:50.083761   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.083785   48088 cri.go:89] found id: ""
	I0229 19:07:50.083795   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:50.083866   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.088829   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:50.088913   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:50.138098   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:50.138120   48088 cri.go:89] found id: ""
	I0229 19:07:50.138148   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:50.138203   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.143751   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:50.143824   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:50.181953   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:50.181973   48088 cri.go:89] found id: ""
	I0229 19:07:50.182005   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:50.182061   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.187673   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:50.187738   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:50.239764   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.239787   48088 cri.go:89] found id: ""
	I0229 19:07:50.239797   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:50.239945   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.244916   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:50.244980   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:50.285741   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.285764   48088 cri.go:89] found id: ""
	I0229 19:07:50.285774   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:50.285833   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.290537   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:50.290607   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:50.334081   48088 cri.go:89] found id: ""
	I0229 19:07:50.334113   48088 logs.go:276] 0 containers: []
	W0229 19:07:50.334125   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:50.334133   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:50.334218   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:50.382210   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.382240   48088 cri.go:89] found id: ""
	I0229 19:07:50.382249   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:50.382309   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:50.387638   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:50.387659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:50.402846   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:50.402871   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:50.449452   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:50.449484   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:50.503887   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:50.503921   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:50.545549   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:50.545620   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:50.607117   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:50.607144   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:50.711241   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:50.711302   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:50.857588   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:50.857622   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:50.912908   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:50.912943   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:50.958888   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:50.958918   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:51.008029   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:51.008059   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:51.064227   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:51.064262   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:53.940284   48088 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8444/healthz ...
	I0229 19:07:53.945473   48088 api_server.go:279] https://192.168.39.210:8444/healthz returned 200:
	ok
	I0229 19:07:53.946909   48088 api_server.go:141] control plane version: v1.28.4
	I0229 19:07:53.946925   48088 api_server.go:131] duration metric: took 3.968511547s to wait for apiserver health ...
	I0229 19:07:53.946938   48088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:07:53.946958   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:07:53.947009   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:07:53.996337   48088 cri.go:89] found id: "afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:53.996357   48088 cri.go:89] found id: ""
	I0229 19:07:53.996364   48088 logs.go:276] 1 containers: [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec]
	I0229 19:07:53.996409   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.001386   48088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:07:54.001465   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:07:54.051794   48088 cri.go:89] found id: "ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.051814   48088 cri.go:89] found id: ""
	I0229 19:07:54.051821   48088 logs.go:276] 1 containers: [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf]
	I0229 19:07:54.051869   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.057560   48088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:07:54.057631   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:07:54.110088   48088 cri.go:89] found id: "f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.110105   48088 cri.go:89] found id: ""
	I0229 19:07:54.110113   48088 logs.go:276] 1 containers: [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3]
	I0229 19:07:54.110156   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.115737   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:07:54.115800   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:07:54.162820   48088 cri.go:89] found id: "7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.162842   48088 cri.go:89] found id: ""
	I0229 19:07:54.162850   48088 logs.go:276] 1 containers: [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff]
	I0229 19:07:54.162899   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.168740   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:07:54.168795   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:07:54.210577   48088 cri.go:89] found id: "66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.210617   48088 cri.go:89] found id: ""
	I0229 19:07:54.210625   48088 logs.go:276] 1 containers: [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f]
	I0229 19:07:54.210673   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.216266   48088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:07:54.216317   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:07:54.255416   48088 cri.go:89] found id: "f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.255442   48088 cri.go:89] found id: ""
	I0229 19:07:54.255451   48088 logs.go:276] 1 containers: [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3]
	I0229 19:07:54.255511   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.260522   48088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:07:54.260585   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:07:54.645279   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:57.144190   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:07:54.309825   48088 cri.go:89] found id: ""
	I0229 19:07:54.309861   48088 logs.go:276] 0 containers: []
	W0229 19:07:54.309873   48088 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:54.309881   48088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:07:54.309950   48088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:07:54.353200   48088 cri.go:89] found id: "dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:54.353219   48088 cri.go:89] found id: ""
	I0229 19:07:54.353225   48088 logs.go:276] 1 containers: [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f]
	I0229 19:07:54.353278   48088 ssh_runner.go:195] Run: which crictl
	I0229 19:07:54.357943   48088 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:54.357965   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:07:54.456867   48088 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:54.456901   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:54.474633   48088 logs.go:123] Gathering logs for kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] ...
	I0229 19:07:54.474659   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec"
	I0229 19:07:54.538218   48088 logs.go:123] Gathering logs for etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] ...
	I0229 19:07:54.538256   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf"
	I0229 19:07:54.591570   48088 logs.go:123] Gathering logs for container status ...
	I0229 19:07:54.591607   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:54.643603   48088 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:54.643638   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:07:54.787255   48088 logs.go:123] Gathering logs for coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] ...
	I0229 19:07:54.787284   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3"
	I0229 19:07:54.836816   48088 logs.go:123] Gathering logs for kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] ...
	I0229 19:07:54.836840   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff"
	I0229 19:07:54.888605   48088 logs.go:123] Gathering logs for kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] ...
	I0229 19:07:54.888635   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f"
	I0229 19:07:54.930913   48088 logs.go:123] Gathering logs for kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] ...
	I0229 19:07:54.930942   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3"
	I0229 19:07:54.996868   48088 logs.go:123] Gathering logs for storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] ...
	I0229 19:07:54.996904   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f"
	I0229 19:07:55.038936   48088 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:07:55.038975   48088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:07:57.896563   48088 system_pods.go:59] 8 kube-system pods found
	I0229 19:07:57.896600   48088 system_pods.go:61] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.896607   48088 system_pods.go:61] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.896612   48088 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.896617   48088 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.896621   48088 system_pods.go:61] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.896625   48088 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.896633   48088 system_pods.go:61] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.896641   48088 system_pods.go:61] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.896650   48088 system_pods.go:74] duration metric: took 3.949706328s to wait for pod list to return data ...
	I0229 19:07:57.896661   48088 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:07:57.899954   48088 default_sa.go:45] found service account: "default"
	I0229 19:07:57.899982   48088 default_sa.go:55] duration metric: took 3.312049ms for default service account to be created ...
	I0229 19:07:57.899994   48088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:07:57.906500   48088 system_pods.go:86] 8 kube-system pods found
	I0229 19:07:57.906535   48088 system_pods.go:89] "coredns-5dd5756b68-fmptg" [ac14ccc5-53fb-41c6-b09a-bdb801f91088] Running
	I0229 19:07:57.906545   48088 system_pods.go:89] "etcd-default-k8s-diff-port-153528" [e06d7f20-0cb4-4560-a746-eae5f366e442] Running
	I0229 19:07:57.906552   48088 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-153528" [1611b07c-d0ca-43c4-81ba-fc7c75b64a01] Running
	I0229 19:07:57.906560   48088 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-153528" [15cdd7c0-b9d9-456e-92ad-9c4de12c53df] Running
	I0229 19:07:57.906566   48088 system_pods.go:89] "kube-proxy-bvrxx" [b826c147-0486-405d-95c7-9b029349e27c] Running
	I0229 19:07:57.906572   48088 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-153528" [c08cb0c5-88da-41ea-982a-1a61e3c24107] Running
	I0229 19:07:57.906584   48088 system_pods.go:89] "metrics-server-57f55c9bc5-v95ws" [e3545189-e705-4d6e-bab6-e1eceba83c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:07:57.906599   48088 system_pods.go:89] "storage-provisioner" [0525367f-c4e1-4d3e-945b-69f408e9fcb0] Running
	I0229 19:07:57.906611   48088 system_pods.go:126] duration metric: took 6.610073ms to wait for k8s-apps to be running ...
	I0229 19:07:57.906624   48088 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:07:57.906684   48088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:57.928757   48088 system_svc.go:56] duration metric: took 22.126375ms WaitForService to wait for kubelet.
	I0229 19:07:57.928784   48088 kubeadm.go:581] duration metric: took 4m15.631847215s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:07:57.928802   48088 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:07:57.932654   48088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:07:57.932673   48088 node_conditions.go:123] node cpu capacity is 2
	I0229 19:07:57.932683   48088 node_conditions.go:105] duration metric: took 3.87689ms to run NodePressure ...
	I0229 19:07:57.932693   48088 start.go:228] waiting for startup goroutines ...
	I0229 19:07:57.932700   48088 start.go:233] waiting for cluster config update ...
	I0229 19:07:57.932711   48088 start.go:242] writing updated cluster config ...
	I0229 19:07:57.932956   48088 ssh_runner.go:195] Run: rm -f paused
	I0229 19:07:57.982872   48088 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:07:57.984759   48088 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-153528" cluster and "default" namespace by default
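At this point the "default-k8s-diff-port-153528" profile is fully up. The checks minikube just performed can be reproduced by hand; a minimal sketch, assuming the apiserver address 192.168.39.210:8444 shown in the log and the kubectl context that minikube configured:

    # probe the same healthz endpoint the log shows returning 200
    # (-k skips TLS verification; the kubeconfig client certs may be
    #  required if the apiserver has anonymous auth disabled)
    curl -k https://192.168.39.210:8444/healthz
    # list the kube-system pods that were awaited above
    kubectl --context default-k8s-diff-port-153528 -n kube-system get pods
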
	I0229 19:07:59.144395   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:01.643273   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:04.142449   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:06.145652   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:08.644566   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:11.144108   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:13.147164   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:15.646715   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:18.143168   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:20.643045   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:22.644969   47515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace has status "Ready":"False"
	I0229 19:08:23.142859   47515 pod_ready.go:81] duration metric: took 4m0.007407175s waiting for pod "metrics-server-57f55c9bc5-nj5h7" in "kube-system" namespace to be "Ready" ...
	E0229 19:08:23.142882   47515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 19:08:23.142892   47515 pod_ready.go:38] duration metric: took 4m1.191734178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
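The four-minute WaitExtra timeout above means the metrics-server pod never reported Ready; minikube records the deadline and moves on rather than aborting the start. A minimal sketch of how that pod could be inspected afterwards, assuming this pid-47515 run is the "no-preload-247197" profile it finishes configuring further below:

    # show container states, events, and the failing readiness probe for the pod named in the log
    kubectl --context no-preload-247197 -n kube-system describe pod metrics-server-57f55c9bc5-nj5h7
    # recent namespace events, oldest first
    kubectl --context no-preload-247197 -n kube-system get events --sort-by=.lastTimestamp
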
	I0229 19:08:23.142918   47515 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:08:23.142959   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:23.143015   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:23.200836   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.200855   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:23.200861   47515 cri.go:89] found id: ""
	I0229 19:08:23.200868   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:23.200925   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.206581   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.211810   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:23.211873   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:23.257499   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.257518   47515 cri.go:89] found id: ""
	I0229 19:08:23.257526   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:23.257568   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.262794   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:23.262858   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:23.314356   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:23.314379   47515 cri.go:89] found id: ""
	I0229 19:08:23.314389   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:23.314433   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.319774   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:23.319828   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:23.363724   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.363746   47515 cri.go:89] found id: ""
	I0229 19:08:23.363753   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:23.363798   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.368994   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:23.369044   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:23.410298   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:23.410317   47515 cri.go:89] found id: ""
	I0229 19:08:23.410323   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:23.410375   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.416866   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:23.416941   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:23.460286   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:23.460313   47515 cri.go:89] found id: ""
	I0229 19:08:23.460323   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:23.460378   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.467279   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:23.467343   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:23.505758   47515 cri.go:89] found id: ""
	I0229 19:08:23.505790   47515 logs.go:276] 0 containers: []
	W0229 19:08:23.505801   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:23.505808   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:23.505870   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:23.545547   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.545573   47515 cri.go:89] found id: ""
	I0229 19:08:23.545581   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:23.545642   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:23.550632   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:23.550652   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:23.613033   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:23.613072   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:23.664593   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:23.664623   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:23.723282   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:23.723311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:23.764629   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:23.764655   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:24.254240   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:24.254271   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:24.321241   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:24.321267   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:24.472841   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:24.472870   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:24.492953   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:24.492987   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:24.603910   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:24.603952   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:24.651625   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:24.651653   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:24.693482   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:24.693508   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:24.746081   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:24.746111   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:27.342960   47515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:08:27.361722   47515 api_server.go:72] duration metric: took 4m6.978456788s to wait for apiserver process to appear ...
	I0229 19:08:27.361756   47515 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:08:27.361795   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:27.361850   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:27.404496   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:27.404525   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:27.404530   47515 cri.go:89] found id: ""
	I0229 19:08:27.404538   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:27.404598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.409339   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.413757   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:27.413814   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:27.456993   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:27.457020   47515 cri.go:89] found id: ""
	I0229 19:08:27.457029   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:27.457089   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.462024   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:27.462088   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:27.506509   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:27.506530   47515 cri.go:89] found id: ""
	I0229 19:08:27.506539   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:27.506598   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.511408   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:27.511480   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:27.558522   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:27.558545   47515 cri.go:89] found id: ""
	I0229 19:08:27.558554   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:27.558638   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.566043   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:27.566119   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:27.613465   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:27.613486   47515 cri.go:89] found id: ""
	I0229 19:08:27.613495   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:27.613556   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.618347   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:27.618412   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:27.668486   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:27.668510   47515 cri.go:89] found id: ""
	I0229 19:08:27.668519   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:27.668572   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.673416   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:27.673476   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:27.718790   47515 cri.go:89] found id: ""
	I0229 19:08:27.718813   47515 logs.go:276] 0 containers: []
	W0229 19:08:27.718824   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:27.718831   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:27.718888   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:27.766906   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:27.766988   47515 cri.go:89] found id: ""
	I0229 19:08:27.767005   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:27.767082   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:27.772046   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:27.772073   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:27.789085   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:27.789118   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:27.915599   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:27.915629   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:28.022219   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:28.022253   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:28.068916   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:28.068942   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:28.116119   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:28.116145   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:28.158177   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:28.158206   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:28.256419   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:28.256452   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:28.310964   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:28.310995   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:28.366330   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:28.366361   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:28.432543   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:28.432577   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:28.839513   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:28.839550   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:28.889908   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:28.889935   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.447297   47515 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0229 19:08:31.456672   47515 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0229 19:08:31.457930   47515 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 19:08:31.457948   47515 api_server.go:131] duration metric: took 4.09618563s to wait for apiserver health ...
	I0229 19:08:31.457955   47515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:08:31.457974   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 19:08:31.458020   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 19:08:31.507399   47515 cri.go:89] found id: "730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:31.507419   47515 cri.go:89] found id: "6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:31.507424   47515 cri.go:89] found id: ""
	I0229 19:08:31.507433   47515 logs.go:276] 2 containers: [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799]
	I0229 19:08:31.507493   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.512606   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.516990   47515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 19:08:31.517059   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 19:08:31.558856   47515 cri.go:89] found id: "3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:31.558878   47515 cri.go:89] found id: ""
	I0229 19:08:31.558886   47515 logs.go:276] 1 containers: [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c]
	I0229 19:08:31.558943   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.564106   47515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 19:08:31.564173   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 19:08:31.607870   47515 cri.go:89] found id: "d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:31.607891   47515 cri.go:89] found id: ""
	I0229 19:08:31.607901   47515 logs.go:276] 1 containers: [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43]
	I0229 19:08:31.607963   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.612655   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 19:08:31.612706   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 19:08:31.653422   47515 cri.go:89] found id: "2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:31.653442   47515 cri.go:89] found id: ""
	I0229 19:08:31.653455   47515 logs.go:276] 1 containers: [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a]
	I0229 19:08:31.653516   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.659010   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 19:08:31.659086   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 19:08:31.705187   47515 cri.go:89] found id: "ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:31.705210   47515 cri.go:89] found id: ""
	I0229 19:08:31.705219   47515 logs.go:276] 1 containers: [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365]
	I0229 19:08:31.705333   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.710080   47515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 19:08:31.710130   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 19:08:31.752967   47515 cri.go:89] found id: "9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:31.752991   47515 cri.go:89] found id: ""
	I0229 19:08:31.753000   47515 logs.go:276] 1 containers: [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35]
	I0229 19:08:31.753061   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.757915   47515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 19:08:31.757983   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 19:08:31.798767   47515 cri.go:89] found id: ""
	I0229 19:08:31.798794   47515 logs.go:276] 0 containers: []
	W0229 19:08:31.798804   47515 logs.go:278] No container was found matching "kindnet"
	I0229 19:08:31.798812   47515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 19:08:31.798872   47515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 19:08:31.841051   47515 cri.go:89] found id: "c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.841071   47515 cri.go:89] found id: ""
	I0229 19:08:31.841078   47515 logs.go:276] 1 containers: [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c]
	I0229 19:08:31.841133   47515 ssh_runner.go:195] Run: which crictl
	I0229 19:08:31.845698   47515 logs.go:123] Gathering logs for storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] ...
	I0229 19:08:31.845732   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c"
	I0229 19:08:31.887190   47515 logs.go:123] Gathering logs for CRI-O ...
	I0229 19:08:31.887218   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 19:08:32.264861   47515 logs.go:123] Gathering logs for kubelet ...
	I0229 19:08:32.264892   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:08:32.367323   47515 logs.go:123] Gathering logs for kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] ...
	I0229 19:08:32.367364   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a"
	I0229 19:08:32.416687   47515 logs.go:123] Gathering logs for coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] ...
	I0229 19:08:32.416714   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43"
	I0229 19:08:32.458459   47515 logs.go:123] Gathering logs for etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] ...
	I0229 19:08:32.458486   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c"
	I0229 19:08:32.502450   47515 logs.go:123] Gathering logs for kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] ...
	I0229 19:08:32.502476   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a"
	I0229 19:08:32.555285   47515 logs.go:123] Gathering logs for kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] ...
	I0229 19:08:32.555311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365"
	I0229 19:08:32.602273   47515 logs.go:123] Gathering logs for kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] ...
	I0229 19:08:32.602303   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35"
	I0229 19:08:32.655346   47515 logs.go:123] Gathering logs for container status ...
	I0229 19:08:32.655373   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:08:32.716233   47515 logs.go:123] Gathering logs for dmesg ...
	I0229 19:08:32.716262   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:08:32.733285   47515 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:08:32.733311   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 19:08:32.854014   47515 logs.go:123] Gathering logs for kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] ...
	I0229 19:08:32.854038   47515 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799"
	I0229 19:08:35.460690   47515 system_pods.go:59] 8 kube-system pods found
	I0229 19:08:35.460717   47515 system_pods.go:61] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.460721   47515 system_pods.go:61] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.460725   47515 system_pods.go:61] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.460728   47515 system_pods.go:61] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.460731   47515 system_pods.go:61] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.460734   47515 system_pods.go:61] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.460740   47515 system_pods.go:61] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.460743   47515 system_pods.go:61] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.460750   47515 system_pods.go:74] duration metric: took 4.002789673s to wait for pod list to return data ...
	I0229 19:08:35.460757   47515 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:08:35.463218   47515 default_sa.go:45] found service account: "default"
	I0229 19:08:35.463248   47515 default_sa.go:55] duration metric: took 2.483102ms for default service account to be created ...
	I0229 19:08:35.463261   47515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:08:35.469351   47515 system_pods.go:86] 8 kube-system pods found
	I0229 19:08:35.469372   47515 system_pods.go:89] "coredns-76f75df574-9z6k5" [818ddb56-c41b-4aae-8490-a9559498eecb] Running
	I0229 19:08:35.469377   47515 system_pods.go:89] "etcd-no-preload-247197" [c6da002d-16f1-4063-9614-f07d5ca6fde8] Running
	I0229 19:08:35.469383   47515 system_pods.go:89] "kube-apiserver-no-preload-247197" [4b330572-426b-414f-bc3f-0b6936d52831] Running
	I0229 19:08:35.469388   47515 system_pods.go:89] "kube-controller-manager-no-preload-247197" [e587f362-08db-4542-9a20-c5422f6607cc] Running
	I0229 19:08:35.469392   47515 system_pods.go:89] "kube-proxy-vvkjv" [b5b911d8-c127-4008-a279-5f1cac593457] Running
	I0229 19:08:35.469396   47515 system_pods.go:89] "kube-scheduler-no-preload-247197" [0063db5e-a134-4cd4-b3d9-90b771e141c4] Running
	I0229 19:08:35.469402   47515 system_pods.go:89] "metrics-server-57f55c9bc5-nj5h7" [c53f2987-829e-4bea-8075-16af3a59249f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 19:08:35.469407   47515 system_pods.go:89] "storage-provisioner" [3c361786-e6d8-4cb4-81c3-387677a3bb05] Running
	I0229 19:08:35.469415   47515 system_pods.go:126] duration metric: took 6.148455ms to wait for k8s-apps to be running ...
	I0229 19:08:35.469422   47515 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:08:35.469464   47515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:08:35.487453   47515 system_svc.go:56] duration metric: took 18.016016ms WaitForService to wait for kubelet.
	I0229 19:08:35.487485   47515 kubeadm.go:581] duration metric: took 4m15.104218747s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:08:35.487509   47515 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:08:35.490828   47515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:08:35.490844   47515 node_conditions.go:123] node cpu capacity is 2
	I0229 19:08:35.490854   47515 node_conditions.go:105] duration metric: took 3.34147ms to run NodePressure ...
	I0229 19:08:35.490864   47515 start.go:228] waiting for startup goroutines ...
	I0229 19:08:35.490871   47515 start.go:233] waiting for cluster config update ...
	I0229 19:08:35.490881   47515 start.go:242] writing updated cluster config ...
	I0229 19:08:35.491140   47515 ssh_runner.go:195] Run: rm -f paused
	I0229 19:08:35.539922   47515 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 19:08:35.542171   47515 out.go:177] * Done! kubectl is now configured to use "no-preload-247197" cluster and "default" namespace by default
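The same post-start verification can be replayed for "no-preload-247197" directly on the guest; a minimal sketch using the profile name from the log:

    # confirm the kubelet unit is active, as the system_svc check above does
    minikube -p no-preload-247197 ssh "sudo systemctl is-active kubelet"
    # locate the control-plane containers the crictl queries above found
    minikube -p no-preload-247197 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
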
	
	
	==> CRI-O <==
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.169005151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234216168972913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32c15d83-32b8-4806-b0f1-c1ede2c6ef08 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.169720943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92bfd0dd-514d-412e-8c01-b82dd6948dc1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.169805704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92bfd0dd-514d-412e-8c01-b82dd6948dc1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.169846151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92bfd0dd-514d-412e-8c01-b82dd6948dc1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.207143677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=329d5224-3dbb-4b17-96dc-7fbb48047792 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.207238251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=329d5224-3dbb-4b17-96dc-7fbb48047792 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.208669852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d7ad3ef-6c80-4227-a44a-5163a1a1dc73 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.209138679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234216209110099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d7ad3ef-6c80-4227-a44a-5163a1a1dc73 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.209816536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99fcf866-e37e-4d59-8127-2aeae628c1ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.209893299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99fcf866-e37e-4d59-8127-2aeae628c1ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.209934093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99fcf866-e37e-4d59-8127-2aeae628c1ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.247055906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9471e184-ed24-4f3c-83ca-f0c9247590d9 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.247153537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9471e184-ed24-4f3c-83ca-f0c9247590d9 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.248595874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f023852-2ccf-4cef-8fd5-11c0c74580dc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.248995173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234216248967600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f023852-2ccf-4cef-8fd5-11c0c74580dc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.249680088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4391bc5e-6935-4b71-a32a-a5ef35839594 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.249733844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4391bc5e-6935-4b71-a32a-a5ef35839594 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.249767837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4391bc5e-6935-4b71-a32a-a5ef35839594 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.288101262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20cd3f06-1005-4706-a7ab-b56c60cf74ca name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.288186428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20cd3f06-1005-4706-a7ab-b56c60cf74ca name=/runtime.v1.RuntimeService/Version
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.289636393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ae9d47-727e-4733-856d-6acbeda6640c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.290034080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234216290007769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ae9d47-727e-4733-856d-6acbeda6640c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.290562170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ed245be-7b85-4141-bf0f-004de710cb2b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.290666431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ed245be-7b85-4141-bf0f-004de710cb2b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:16:56 old-k8s-version-631080 crio[643]: time="2024-02-29 19:16:56.290730083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5ed245be-7b85-4141-bf0f-004de710cb2b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053084] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651606] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.237160] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.709570] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.273436] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.071452] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078075] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.234498] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.167610] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.309321] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Feb29 18:58] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062684] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 19:02] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.069082] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 19:04] systemd-fstab-generator[9767]: Ignoring "noauto" option for root device
	[  +0.062408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:16:56 up 19 min,  0 users,  load average: 0.00, 0.04, 0.10
	Linux old-k8s-version-631080 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 19:16:54 old-k8s-version-631080 kubelet[20651]: F0229 19:16:54.614388   20651 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:16:54 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:16:54 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:16:55 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1020.
	Feb 29 19:16:55 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:16:55 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: I0229 19:16:55.373345   20670 server.go:410] Version: v1.16.0
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: I0229 19:16:55.373639   20670 plugins.go:100] No cloud provider specified.
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: I0229 19:16:55.373652   20670 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: I0229 19:16:55.376560   20670 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: W0229 19:16:55.377960   20670 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:16:55 old-k8s-version-631080 kubelet[20670]: F0229 19:16:55.378053   20670 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:16:55 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:16:55 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:16:56 old-k8s-version-631080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1021.
	Feb 29 19:16:56 old-k8s-version-631080 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:16:56 old-k8s-version-631080 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: I0229 19:16:56.136428   20695 server.go:410] Version: v1.16.0
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: I0229 19:16:56.136704   20695 plugins.go:100] No cloud provider specified.
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: I0229 19:16:56.136718   20695 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: I0229 19:16:56.138911   20695 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: W0229 19:16:56.139847   20695 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:16:56 old-k8s-version-631080 kubelet[20695]: F0229 19:16:56.139900   20695 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:16:56 old-k8s-version-631080 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:16:56 old-k8s-version-631080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 2 (236.761113ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-631080" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.40s)
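Reading the kubelet journal captured above, the node is stuck in a systemd restart loop ("failed to run Kubelet: mountpoint for cpu not found", restart counter past 1000), so the apiserver on localhost:8443 never comes back and the addon check has nothing to query. A minimal diagnostic sketch follows, assuming the profile still exists and is reachable over minikube ssh; the commands and paths are illustrative and are not part of the test harness:

	# Check whether the cpu cgroup controller is present and mounted.
	# The v1.16.0 kubelet predates cgroup v2 support, so it needs a v1 per-controller hierarchy.
	minikube ssh -p old-k8s-version-631080 "grep -w cpu /proc/cgroups"
	minikube ssh -p old-k8s-version-631080 "mount | grep cgroup"
	# The legacy kubelet looks for a mountpoint such as /sys/fs/cgroup/cpu (or cpu,cpuacct).
	minikube ssh -p old-k8s-version-631080 "ls -d /sys/fs/cgroup/cpu*"

If the cpu controller mount is absent (for example on a guest that only exposes cgroup v2), the v1.16.0 kubelet exits with status 255 and systemd keeps rescheduling it, which matches the journal excerpt above.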

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (114.68s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991128 -n embed-certs-991128
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:18:11.475867073 +0000 UTC m=+6048.970850859
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-991128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-991128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (8.458µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-991128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
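The assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment carries the custom image passed earlier via "addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4" (see the Audit table in the post-mortem logs below); the deployment info is empty here because the describe call above already hit the context deadline. A hedged manual equivalent of that check, assuming the apiserver for the embed-certs-991128 profile is reachable:

	# Illustrative only: print the container image(s) of the scraper deployment
	# and look for registry.k8s.io/echoserver:1.4 in the output.
	kubectl --context embed-certs-991128 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the 9m0s budget was exhausted waiting for the dashboard pod, so the equivalent query never had a chance to succeed within the test's deadline.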
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-991128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-991128 logs -n 25: (1.232858274s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC | 29 Feb 24 19:17 UTC |
	| start   | -p auto-587185 --memory=3072                           | auto-587185                  | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-130594             | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC | 29 Feb 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-130594                  | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:18:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:18:10.371527   54126 out.go:291] Setting OutFile to fd 1 ...
	I0229 19:18:10.371625   54126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:18:10.371635   54126 out.go:304] Setting ErrFile to fd 2...
	I0229 19:18:10.371641   54126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:18:10.371833   54126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 19:18:10.372367   54126 out.go:298] Setting JSON to false
	I0229 19:18:10.373220   54126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7235,"bootTime":1709227056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 19:18:10.373291   54126 start.go:139] virtualization: kvm guest
	I0229 19:18:10.375538   54126 out.go:177] * [newest-cni-130594] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 19:18:10.376932   54126 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:18:10.376883   54126 notify.go:220] Checking for updates...
	I0229 19:18:10.378216   54126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:18:10.379728   54126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:18:10.381119   54126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:18:10.382257   54126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 19:18:10.383414   54126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:18:10.384938   54126 config.go:182] Loaded profile config "newest-cni-130594": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:18:10.385283   54126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:18:10.385323   54126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:18:10.399773   54126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0229 19:18:10.400111   54126 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:18:10.400646   54126 main.go:141] libmachine: Using API Version  1
	I0229 19:18:10.400665   54126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:18:10.401037   54126 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:18:10.401235   54126 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:18:10.401460   54126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:18:10.401859   54126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:18:10.401907   54126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:18:10.416172   54126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0229 19:18:10.416668   54126 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:18:10.417107   54126 main.go:141] libmachine: Using API Version  1
	I0229 19:18:10.417135   54126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:18:10.417417   54126 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:18:10.417594   54126 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:18:10.455709   54126 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 19:18:10.456927   54126 start.go:299] selected driver: kvm2
	I0229 19:18:10.456942   54126 start.go:903] validating driver "kvm2" against &{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.67 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:18:10.457040   54126 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:18:10.457740   54126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:18:10.457814   54126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 19:18:10.473826   54126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 19:18:10.474426   54126 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 19:18:10.474515   54126 cni.go:84] Creating CNI manager for ""
	I0229 19:18:10.474534   54126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:18:10.474563   54126 start_flags.go:323] config:
	{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.67 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expose
dPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:18:10.474764   54126 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:18:10.476616   54126 out.go:177] * Starting control plane node newest-cni-130594 in cluster newest-cni-130594
	
	
	==> CRI-O <==
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.147197111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234292147172237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61b8db74-7e2c-45e4-8f8b-1bca0dc8add9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.147996149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fab537e0-f10d-4905-b9d0-44d726f2871a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.148048995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fab537e0-f10d-4905-b9d0-44d726f2871a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.148219288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fab537e0-f10d-4905-b9d0-44d726f2871a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.191402488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f3c344b-e372-4793-b387-f2facaaab9ca name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.191473134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f3c344b-e372-4793-b387-f2facaaab9ca name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.192847591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e87985b-29ff-4a96-868b-e3593a4577cf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.193380303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234292193340408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e87985b-29ff-4a96-868b-e3593a4577cf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.194124005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e376668-b37a-499d-a2fe-e9de70c16aab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.194178907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e376668-b37a-499d-a2fe-e9de70c16aab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.194396161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e376668-b37a-499d-a2fe-e9de70c16aab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.237849177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9951dae2-76cd-47bf-8862-8172c45982fc name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.237932868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9951dae2-76cd-47bf-8862-8172c45982fc name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.240045574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcd896ec-95a3-4dec-bcfd-3d714e4cc8e0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.240576233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234292240539282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcd896ec-95a3-4dec-bcfd-3d714e4cc8e0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.241592692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c0b2ac-83da-482d-b259-b916f05bc675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.241649238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6c0b2ac-83da-482d-b259-b916f05bc675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.241896799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6c0b2ac-83da-482d-b259-b916f05bc675 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.281532660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=feeb85aa-17a1-44b2-ab50-64d1f9f96d9f name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.281600645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=feeb85aa-17a1-44b2-ab50-64d1f9f96d9f name=/runtime.v1.RuntimeService/Version
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.283250453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6ac44f5-33d3-4a29-bcad-5b1522a13218 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.284026488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234292284003089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6ac44f5-33d3-4a29-bcad-5b1522a13218 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.285455281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea3f4595-aba0-43eb-ae16-9d01617ea683 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.285507064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea3f4595-aba0-43eb-ae16-9d01617ea683 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:18:12 embed-certs-991128 crio[671]: time="2024-02-29 19:18:12.285669191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada,PodSandboxId:d021343dc78c4c8fff740ae383784d90d75d3ca0eb97f4f9680d5d1d7496b029,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233381501478855,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ce642e-81dc-4dd7-be8e-3796e19f8f03,},Annotations:map[string]string{io.kubernetes.container.hash: 28dd27d7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72,PodSandboxId:e8b69c01808092e60eb2934c57c1b4ab3db6198e2df112c64b87974d8dbadd2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233379574574830,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nth8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeec9c32-9f61-4cb7-b1fb-3dd75c5af668,},Annotations:map[string]string{io.kubernetes.container.hash: 266168b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d,PodSandboxId:542b014e67e872c2082e9249b966712bb148b172e0a38ece70d5c85bb0f20f34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233379070088123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355
24449-8c5a-440d-a45f-ce631ebff076,},Annotations:map[string]string{io.kubernetes.container.hash: ac0db45a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e,PodSandboxId:4199ed14d97b0118203b50e45f45ab826ce09cf0cc4da0ef56dbee5cce4b9101,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233359498073327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f60d3a28ff8f8340730bf0057041fb20,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13b0311e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe,PodSandboxId:814a5a953c233b6d0febf2ff987abd74715833ed7cafd0554b1076e62af233c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233359444387154,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e22c8a948f076983154faaffa6d2b95,},Annotations:map[st
ring]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0,PodSandboxId:31761f95bbfbbe203a3cba92428b86af56068633459259fe1714dce8e1217961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233359451841969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cea9e64667edc13c8ed77ee608a410bf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96,PodSandboxId:8fd8c4a1941cadd559b51da7b96b95d27f98cfdf47952563226a91f64bb269df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233359440340624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-991128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9594481c9c21af2b85fe50da50c97f,},Annotations:map
[string]string{io.kubernetes.container.hash: 68d8cdba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea3f4595-aba0-43eb-ae16-9d01617ea683 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d4d0c25cc639       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d021343dc78c4       storage-provisioner
	7220454898e12       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   e8b69c0180809       coredns-5dd5756b68-nth8z
	3327a9756b71a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   542b014e67e87       kube-proxy-5grst
	795516eef7b67       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   4199ed14d97b0       etcd-embed-certs-991128
	9099ab49263e5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   31761f95bbfbb       kube-controller-manager-embed-certs-991128
	f1accc151694b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   814a5a953c233       kube-scheduler-embed-certs-991128
	18f508cd43779       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   8fd8c4a1941ca       kube-apiserver-embed-certs-991128
	
	
	==> coredns [7220454898e1219d80ac33161ddc4116866d5cf1d1382767c1bf456cfa6b3c72] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57010 - 5854 "HINFO IN 4633225628833145899.670971604328587180. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015183249s
	
	
	==> describe nodes <==
	Name:               embed-certs-991128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-991128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=embed-certs-991128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_02_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:02:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-991128
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:13:19 +0000   Thu, 29 Feb 2024 19:02:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.34
	  Hostname:    embed-certs-991128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 60eb2ab6d53b4d4cad87df9e82bf910b
	  System UUID:                60eb2ab6-d53b-4d4c-ad87-df9e82bf910b
	  Boot ID:                    3d3f6535-305d-44f2-ad07-f57f11ba5710
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nth8z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-991128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-991128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-991128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5grst                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-991128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-r66xw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-991128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-991128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-991128 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-991128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-991128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-991128 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-991128 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-991128 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-991128 event: Registered Node embed-certs-991128 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051235] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527779] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.311241] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.714758] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.348116] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.063973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062665] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.231859] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.144249] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.278556] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +17.190029] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.063270] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.700407] kauditd_printk_skb: 72 callbacks suppressed
	[  +7.468648] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:02] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.234285] systemd-fstab-generator[3362]: Ignoring "noauto" option for root device
	[  +4.665382] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.122649] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[ +12.974486] kauditd_printk_skb: 14 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [795516eef7b679f1eefaf293eab6b1683326e3fcb3f4e5f848194e4a7ab1566e] <==
	{"level":"info","ts":"2024-02-29T19:02:40.235943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T19:02:40.235977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgPreVoteResp from 860cec0469348f9b at term 1"}
	{"level":"info","ts":"2024-02-29T19:02:40.236007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b received MsgVoteResp from 860cec0469348f9b at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"860cec0469348f9b became leader at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.236082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 860cec0469348f9b elected leader 860cec0469348f9b at term 2"}
	{"level":"info","ts":"2024-02-29T19:02:40.23931Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"860cec0469348f9b","local-member-attributes":"{Name:embed-certs-991128 ClientURLs:[https://192.168.61.34:2379]}","request-path":"/0/members/860cec0469348f9b/attributes","cluster-id":"3b988ca96e7ba1f2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:02:40.24177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:02:40.244933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.34:2379"}
	{"level":"info","ts":"2024-02-29T19:02:40.241787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:02:40.246665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:02:40.250238Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.250807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:02:40.267814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T19:02:40.267877Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b988ca96e7ba1f2","local-member-id":"860cec0469348f9b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.267963Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:02:40.268003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:12:40.471107Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-02-29T19:12:40.474106Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":715,"took":"2.630272ms","hash":2473211201}
	{"level":"info","ts":"2024-02-29T19:12:40.474176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2473211201,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-02-29T19:17:32.767652Z","caller":"traceutil/trace.go:171","msg":"trace[525580931] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"246.99833ms","start":"2024-02-29T19:17:32.52061Z","end":"2024-02-29T19:17:32.767608Z","steps":["trace[525580931] 'process raft request'  (duration: 246.564262ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:17:33.126825Z","caller":"traceutil/trace.go:171","msg":"trace[946979480] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"120.440036ms","start":"2024-02-29T19:17:33.006254Z","end":"2024-02-29T19:17:33.126694Z","steps":["trace[946979480] 'process raft request'  (duration: 120.298228ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:17:40.480176Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-02-29T19:17:40.48236Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":958,"took":"1.822663ms","hash":3148901411}
	{"level":"info","ts":"2024-02-29T19:17:40.482439Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3148901411,"revision":958,"compact-revision":715}
	
	
	==> kernel <==
	 19:18:12 up 21 min,  0 users,  load average: 0.09, 0.14, 0.11
	Linux embed-certs-991128 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [18f508cd43779fa9b03e93445f4d03abfe5f3e6291dc05cf116f16714d04ec96] <==
	E0229 19:13:43.618837       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:13:43.618871       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:14:42.517189       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 19:15:42.516069       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:15:43.618624       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:43.618690       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:15:43.618697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:43.618985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:43.619107       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:15:43.620806       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:16:42.516111       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 19:17:42.516296       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:17:42.621669       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:42.621905       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:17:42.622238       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:17:43.623116       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:43.623183       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:17:43.623191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:17:43.623269       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:43.623350       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:17:43.624610       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9099ab49263e57e41e6b7b28a87e810a951b495e3d0c879bf7ac045d0d2c2bd0] <==
	I0229 19:12:28.915833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:58.526522       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:58.924893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:28.534079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:28.935873       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:58.541950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:58.945190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:14:16.386401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.527µs"
	I0229 19:14:27.381792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="311.153µs"
	E0229 19:14:28.547330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:28.953634       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:58.554510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:58.963241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:28.560222       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:28.972179       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:58.567094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:58.981412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:28.573239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:28.989920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:58.581079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:59.000372       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:28.588245       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:29.011452       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:58.597160       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:59.022490       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3327a9756b71a79f82db55da77bd2297066ea76c84094764ff2fcbd04e4f528d] <==
	I0229 19:03:00.180811       1 server_others.go:69] "Using iptables proxy"
	I0229 19:03:00.212117       1 node.go:141] Successfully retrieved node IP: 192.168.61.34
	I0229 19:03:00.381674       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 19:03:00.381697       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:03:00.386578       1 server_others.go:152] "Using iptables Proxier"
	I0229 19:03:00.386678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:03:00.387250       1 server.go:846] "Version info" version="v1.28.4"
	I0229 19:03:00.387347       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:03:00.391407       1 config.go:188] "Starting service config controller"
	I0229 19:03:00.391962       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:03:00.391999       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:03:00.392053       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:03:00.395031       1 config.go:315] "Starting node config controller"
	I0229 19:03:00.395039       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:03:00.492543       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:03:00.492559       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:03:00.495858       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f1accc151694b40814505327a0743f46796dacf516f08043f58b1c42147e25fe] <==
	W0229 19:02:43.571026       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 19:02:43.571161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 19:02:43.619457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.619524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.627008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 19:02:43.627057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 19:02:43.681069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.681255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.734002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 19:02:43.734878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 19:02:43.799372       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:02:43.799500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:02:43.848210       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 19:02:43.848268       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:02:43.850520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:02:43.850539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:02:43.909247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:02:43.909423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:02:43.939099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:02:43.939149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 19:02:43.942837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.942882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:02:43.976828       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:02:43.976918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0229 19:02:46.746332       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:15:46 embed-certs-991128 kubelet[3690]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:15:52 embed-certs-991128 kubelet[3690]: E0229 19:15:52.364639    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:16:05 embed-certs-991128 kubelet[3690]: E0229 19:16:05.364006    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:16:20 embed-certs-991128 kubelet[3690]: E0229 19:16:20.365669    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:16:34 embed-certs-991128 kubelet[3690]: E0229 19:16:34.364283    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:16:46 embed-certs-991128 kubelet[3690]: E0229 19:16:46.468634    3690 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:16:46 embed-certs-991128 kubelet[3690]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:16:46 embed-certs-991128 kubelet[3690]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:16:46 embed-certs-991128 kubelet[3690]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:16:46 embed-certs-991128 kubelet[3690]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:16:47 embed-certs-991128 kubelet[3690]: E0229 19:16:47.364335    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:17:01 embed-certs-991128 kubelet[3690]: E0229 19:17:01.364391    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:17:14 embed-certs-991128 kubelet[3690]: E0229 19:17:14.363492    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:17:28 embed-certs-991128 kubelet[3690]: E0229 19:17:28.365543    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:17:42 embed-certs-991128 kubelet[3690]: E0229 19:17:42.365093    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:17:46 embed-certs-991128 kubelet[3690]: E0229 19:17:46.470317    3690 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:17:46 embed-certs-991128 kubelet[3690]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:17:46 embed-certs-991128 kubelet[3690]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:17:46 embed-certs-991128 kubelet[3690]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:17:46 embed-certs-991128 kubelet[3690]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:17:53 embed-certs-991128 kubelet[3690]: E0229 19:17:53.364486    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	Feb 29 19:18:06 embed-certs-991128 kubelet[3690]: E0229 19:18:06.366119    3690 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r66xw" podUID="8eb63357-6b36-49f3-98a5-c74bb4a9b09c"
	
	
	==> storage-provisioner [6d4d0c25cc63929278e7e44eb4cd9bc93f376b8bb76a05b78a29d9d9b9794ada] <==
	I0229 19:03:01.598428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:03:01.615668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:03:01.615843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:03:01.645506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3c0560f-6c58-46c3-9e8c-87fe1f4fcc81", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74 became leader
	I0229 19:03:01.646855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:03:01.647128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74!
	I0229 19:03:01.748366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-991128_9133536c-f38b-4982-9f58-caff1afaff74!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991128 -n embed-certs-991128
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-991128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-r66xw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw: exit status 1 (64.373155ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-r66xw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-991128 describe pod metrics-server-57f55c9bc5-r66xw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (114.68s)
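Note on the repeated kubelet errors above: metrics-server never starts because its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4, a deliberately unreachable registry (the Audit table below shows the suite passing --registries=MetricsServer=fake.domain for another profile), so every pull attempt ends in ImagePullBackOff. A minimal, hypothetical check of the configured image — not part of the recorded run, reusing the profile and namespace names from the logs above:

	# hypothetical diagnostic only: print the image the metrics-server deployment is set to pull
	kubectl --context embed-certs-991128 -n kube-system \
	  get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'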

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (168.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:19:47.306397261 +0000 UTC m=+6144.801381034
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.365µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-153528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
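The assertion above fails because the deployment info could never be fetched before the context deadline, so the expected image substring " registry.k8s.io/echoserver:1.4" is never seen. A hypothetical way to repeat that image check by hand — not part of the recorded run, reusing the context and namespace names from the commands above:

	# hypothetical diagnostic only: list container images per deployment in the dashboard namespace
	kubectl --context default-k8s-diff-port-153528 -n kubernetes-dashboard \
	  get deployments -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'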
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-153528 logs -n 25
E0229 19:19:47.756073   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-153528 logs -n 25: (1.436115205s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC | 29 Feb 24 19:17 UTC |
	| start   | -p auto-587185 --memory=3072                           | auto-587185                  | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC | 29 Feb 24 19:19 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-130594             | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:17 UTC | 29 Feb 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-130594                  | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:18 UTC |
	| start   | -p kindnet-587185                                      | kindnet-587185               | jenkins | v1.32.0 | 29 Feb 24 19:18 UTC | 29 Feb 24 19:19 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-130594 image list                           | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	| delete  | -p newest-cni-130594                                   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	| start   | -p calico-587185 --memory=3072                         | calico-587185                | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-587185 pgrep -a                                | auto-587185                  | jenkins | v1.32.0 | 29 Feb 24 19:19 UTC | 29 Feb 24 19:19 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:19:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:19:09.924339   55255 out.go:291] Setting OutFile to fd 1 ...
	I0229 19:19:09.926253   55255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:19:09.926269   55255 out.go:304] Setting ErrFile to fd 2...
	I0229 19:19:09.926275   55255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:19:09.926641   55255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 19:19:09.927498   55255 out.go:298] Setting JSON to false
	I0229 19:19:09.928427   55255 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7294,"bootTime":1709227056,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 19:19:09.928515   55255 start.go:139] virtualization: kvm guest
	I0229 19:19:10.004252   55255 out.go:177] * [calico-587185] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 19:19:10.067192   55255 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:19:10.067120   55255 notify.go:220] Checking for updates...
	I0229 19:19:10.069888   55255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:19:10.071477   55255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:19:10.073077   55255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:19:10.075051   55255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 19:19:10.076463   55255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:19:10.078444   55255 config.go:182] Loaded profile config "auto-587185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:19:10.078541   55255 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:19:10.078624   55255 config.go:182] Loaded profile config "kindnet-587185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:19:10.078705   55255 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:19:10.117306   55255 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 19:19:10.118511   55255 start.go:299] selected driver: kvm2
	I0229 19:19:10.118521   55255 start.go:903] validating driver "kvm2" against <nil>
	I0229 19:19:10.118532   55255 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:19:10.119231   55255 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:19:10.119296   55255 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 19:19:10.134247   55255 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 19:19:10.134293   55255 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:19:10.134550   55255 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:19:10.134638   55255 cni.go:84] Creating CNI manager for "calico"
	I0229 19:19:10.134656   55255 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0229 19:19:10.134669   55255 start_flags.go:323] config:
	{Name:calico-587185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-587185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:19:10.134850   55255 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:19:10.136777   55255 out.go:177] * Starting control plane node calico-587185 in cluster calico-587185
	I0229 19:19:10.138187   55255 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 19:19:10.138219   55255 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 19:19:10.138226   55255 cache.go:56] Caching tarball of preloaded images
	I0229 19:19:10.138292   55255 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 19:19:10.138302   55255 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 19:19:10.138384   55255 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/config.json ...
	I0229 19:19:10.138402   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/config.json: {Name:mk5722a13c5d9e566254b8743c7055dfe664991f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:10.138522   55255 start.go:365] acquiring machines lock for calico-587185: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:19:10.138555   55255 start.go:369] acquired machines lock for "calico-587185" in 20.794µs
	I0229 19:19:10.138587   55255 start.go:93] Provisioning new machine with config: &{Name:calico-587185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-587185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:19:10.138660   55255 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 19:19:07.065337   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:09.084243   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:11.561853   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:09.536946   54399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/proxy-client.key ...
	I0229 19:19:09.671735   54399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/proxy-client.key: {Name:mk5f91099aa3ce99f97ab7d2b100c34b9c7d904f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:09.672098   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 19:19:09.672146   54399 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 19:19:09.672161   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 19:19:09.672195   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 19:19:09.672236   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 19:19:09.672272   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 19:19:09.672325   54399 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:19:09.673089   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:19:09.705423   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:19:09.737971   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:19:09.771829   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/kindnet-587185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:19:09.800994   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:19:09.832039   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:19:09.859004   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:19:09.889857   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 19:19:09.925890   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:19:09.954404   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 19:19:09.981479   54399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 19:19:10.009865   54399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:19:10.029655   54399 ssh_runner.go:195] Run: openssl version
	I0229 19:19:10.036541   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:19:10.049282   54399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:10.055918   54399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:10.055997   54399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:10.065138   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:19:10.082038   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 19:19:10.095601   54399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 19:19:10.101746   54399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 19:19:10.101816   54399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 19:19:10.108847   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 19:19:10.122109   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 19:19:10.134610   54399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 19:19:10.140554   54399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 19:19:10.140605   54399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 19:19:10.146964   54399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:19:10.159868   54399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:19:10.164709   54399 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:19:10.164756   54399 kubeadm.go:404] StartCluster: {Name:kindnet-587185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-587185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:19:10.164881   54399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 19:19:10.164935   54399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 19:19:10.225785   54399 cri.go:89] found id: ""
	I0229 19:19:10.225853   54399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:19:10.237210   54399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:19:10.250293   54399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:19:10.261433   54399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:19:10.261483   54399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:19:10.475562   54399 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:19:10.140168   55255 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 19:19:10.140328   55255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:10.140372   55255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:10.156059   55255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0229 19:19:10.156495   55255 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:10.157023   55255 main.go:141] libmachine: Using API Version  1
	I0229 19:19:10.157048   55255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:10.157380   55255 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:10.157580   55255 main.go:141] libmachine: (calico-587185) Calling .GetMachineName
	I0229 19:19:10.157742   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:10.157904   55255 start.go:159] libmachine.API.Create for "calico-587185" (driver="kvm2")
	I0229 19:19:10.157942   55255 client.go:168] LocalClient.Create starting
	I0229 19:19:10.157979   55255 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 19:19:10.158018   55255 main.go:141] libmachine: Decoding PEM data...
	I0229 19:19:10.158038   55255 main.go:141] libmachine: Parsing certificate...
	I0229 19:19:10.158115   55255 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 19:19:10.158144   55255 main.go:141] libmachine: Decoding PEM data...
	I0229 19:19:10.158166   55255 main.go:141] libmachine: Parsing certificate...
	I0229 19:19:10.158189   55255 main.go:141] libmachine: Running pre-create checks...
	I0229 19:19:10.158200   55255 main.go:141] libmachine: (calico-587185) Calling .PreCreateCheck
	I0229 19:19:10.158632   55255 main.go:141] libmachine: (calico-587185) Calling .GetConfigRaw
	I0229 19:19:10.159049   55255 main.go:141] libmachine: Creating machine...
	I0229 19:19:10.159064   55255 main.go:141] libmachine: (calico-587185) Calling .Create
	I0229 19:19:10.159204   55255 main.go:141] libmachine: (calico-587185) Creating KVM machine...
	I0229 19:19:10.160477   55255 main.go:141] libmachine: (calico-587185) DBG | found existing default KVM network
	I0229 19:19:10.161586   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.161433   55277 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:fc:f9} reservation:<nil>}
	I0229 19:19:10.162512   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.162436   55277 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:3d:6d} reservation:<nil>}
	I0229 19:19:10.163463   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.163391   55277 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:9f:b2} reservation:<nil>}
	I0229 19:19:10.164559   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.164442   55277 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003c65e0}
	I0229 19:19:10.169746   55255 main.go:141] libmachine: (calico-587185) DBG | trying to create private KVM network mk-calico-587185 192.168.72.0/24...
	I0229 19:19:10.245338   55255 main.go:141] libmachine: (calico-587185) DBG | private KVM network mk-calico-587185 192.168.72.0/24 created
	I0229 19:19:10.245380   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.245158   55277 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:19:10.245399   55255 main.go:141] libmachine: (calico-587185) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185 ...
	I0229 19:19:10.245425   55255 main.go:141] libmachine: (calico-587185) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 19:19:10.245449   55255 main.go:141] libmachine: (calico-587185) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 19:19:10.482912   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.482774   55277 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa...
	I0229 19:19:10.623100   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.622936   55277 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/calico-587185.rawdisk...
	I0229 19:19:10.623139   55255 main.go:141] libmachine: (calico-587185) DBG | Writing magic tar header
	I0229 19:19:10.623160   55255 main.go:141] libmachine: (calico-587185) DBG | Writing SSH key tar header
	I0229 19:19:10.623174   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:10.623106   55277 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185 ...
	I0229 19:19:10.623269   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185
	I0229 19:19:10.623309   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 19:19:10.623323   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185 (perms=drwx------)
	I0229 19:19:10.623339   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 19:19:10.623352   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:19:10.623364   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 19:19:10.623377   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 19:19:10.623390   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home/jenkins
	I0229 19:19:10.623404   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 19:19:10.623416   55255 main.go:141] libmachine: (calico-587185) DBG | Checking permissions on dir: /home
	I0229 19:19:10.623430   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 19:19:10.623444   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 19:19:10.623453   55255 main.go:141] libmachine: (calico-587185) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 19:19:10.623466   55255 main.go:141] libmachine: (calico-587185) DBG | Skipping /home - not owner
	I0229 19:19:10.623477   55255 main.go:141] libmachine: (calico-587185) Creating domain...
	I0229 19:19:10.624522   55255 main.go:141] libmachine: (calico-587185) define libvirt domain using xml: 
	I0229 19:19:10.624545   55255 main.go:141] libmachine: (calico-587185) <domain type='kvm'>
	I0229 19:19:10.624567   55255 main.go:141] libmachine: (calico-587185)   <name>calico-587185</name>
	I0229 19:19:10.624581   55255 main.go:141] libmachine: (calico-587185)   <memory unit='MiB'>3072</memory>
	I0229 19:19:10.624591   55255 main.go:141] libmachine: (calico-587185)   <vcpu>2</vcpu>
	I0229 19:19:10.624599   55255 main.go:141] libmachine: (calico-587185)   <features>
	I0229 19:19:10.624607   55255 main.go:141] libmachine: (calico-587185)     <acpi/>
	I0229 19:19:10.624614   55255 main.go:141] libmachine: (calico-587185)     <apic/>
	I0229 19:19:10.624625   55255 main.go:141] libmachine: (calico-587185)     <pae/>
	I0229 19:19:10.624632   55255 main.go:141] libmachine: (calico-587185)     
	I0229 19:19:10.624637   55255 main.go:141] libmachine: (calico-587185)   </features>
	I0229 19:19:10.624644   55255 main.go:141] libmachine: (calico-587185)   <cpu mode='host-passthrough'>
	I0229 19:19:10.624648   55255 main.go:141] libmachine: (calico-587185)   
	I0229 19:19:10.624654   55255 main.go:141] libmachine: (calico-587185)   </cpu>
	I0229 19:19:10.624660   55255 main.go:141] libmachine: (calico-587185)   <os>
	I0229 19:19:10.624667   55255 main.go:141] libmachine: (calico-587185)     <type>hvm</type>
	I0229 19:19:10.624672   55255 main.go:141] libmachine: (calico-587185)     <boot dev='cdrom'/>
	I0229 19:19:10.624677   55255 main.go:141] libmachine: (calico-587185)     <boot dev='hd'/>
	I0229 19:19:10.624682   55255 main.go:141] libmachine: (calico-587185)     <bootmenu enable='no'/>
	I0229 19:19:10.624690   55255 main.go:141] libmachine: (calico-587185)   </os>
	I0229 19:19:10.624695   55255 main.go:141] libmachine: (calico-587185)   <devices>
	I0229 19:19:10.624702   55255 main.go:141] libmachine: (calico-587185)     <disk type='file' device='cdrom'>
	I0229 19:19:10.624710   55255 main.go:141] libmachine: (calico-587185)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/boot2docker.iso'/>
	I0229 19:19:10.624717   55255 main.go:141] libmachine: (calico-587185)       <target dev='hdc' bus='scsi'/>
	I0229 19:19:10.624722   55255 main.go:141] libmachine: (calico-587185)       <readonly/>
	I0229 19:19:10.624727   55255 main.go:141] libmachine: (calico-587185)     </disk>
	I0229 19:19:10.624732   55255 main.go:141] libmachine: (calico-587185)     <disk type='file' device='disk'>
	I0229 19:19:10.624741   55255 main.go:141] libmachine: (calico-587185)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 19:19:10.624754   55255 main.go:141] libmachine: (calico-587185)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/calico-587185.rawdisk'/>
	I0229 19:19:10.624761   55255 main.go:141] libmachine: (calico-587185)       <target dev='hda' bus='virtio'/>
	I0229 19:19:10.624766   55255 main.go:141] libmachine: (calico-587185)     </disk>
	I0229 19:19:10.624772   55255 main.go:141] libmachine: (calico-587185)     <interface type='network'>
	I0229 19:19:10.624778   55255 main.go:141] libmachine: (calico-587185)       <source network='mk-calico-587185'/>
	I0229 19:19:10.624783   55255 main.go:141] libmachine: (calico-587185)       <model type='virtio'/>
	I0229 19:19:10.624788   55255 main.go:141] libmachine: (calico-587185)     </interface>
	I0229 19:19:10.624798   55255 main.go:141] libmachine: (calico-587185)     <interface type='network'>
	I0229 19:19:10.624804   55255 main.go:141] libmachine: (calico-587185)       <source network='default'/>
	I0229 19:19:10.624813   55255 main.go:141] libmachine: (calico-587185)       <model type='virtio'/>
	I0229 19:19:10.624843   55255 main.go:141] libmachine: (calico-587185)     </interface>
	I0229 19:19:10.624869   55255 main.go:141] libmachine: (calico-587185)     <serial type='pty'>
	I0229 19:19:10.624879   55255 main.go:141] libmachine: (calico-587185)       <target port='0'/>
	I0229 19:19:10.624889   55255 main.go:141] libmachine: (calico-587185)     </serial>
	I0229 19:19:10.624898   55255 main.go:141] libmachine: (calico-587185)     <console type='pty'>
	I0229 19:19:10.624910   55255 main.go:141] libmachine: (calico-587185)       <target type='serial' port='0'/>
	I0229 19:19:10.624920   55255 main.go:141] libmachine: (calico-587185)     </console>
	I0229 19:19:10.624930   55255 main.go:141] libmachine: (calico-587185)     <rng model='virtio'>
	I0229 19:19:10.624941   55255 main.go:141] libmachine: (calico-587185)       <backend model='random'>/dev/random</backend>
	I0229 19:19:10.624951   55255 main.go:141] libmachine: (calico-587185)     </rng>
	I0229 19:19:10.624972   55255 main.go:141] libmachine: (calico-587185)     
	I0229 19:19:10.624989   55255 main.go:141] libmachine: (calico-587185)     
	I0229 19:19:10.624998   55255 main.go:141] libmachine: (calico-587185)   </devices>
	I0229 19:19:10.625008   55255 main.go:141] libmachine: (calico-587185) </domain>
	I0229 19:19:10.625018   55255 main.go:141] libmachine: (calico-587185) 
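The XML block logged above is the libvirt domain definition the kvm2 driver submits before booting the guest. A minimal sketch of that define-then-start flow with the libvirt Go bindings follows; the module path, file name and error handling are assumptions for illustration, not the driver's actual code.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed import path; older code uses github.com/libvirt/libvirt-go
)

func main() {
	// Connect to the local system hypervisor, the same URI a KVM driver would target.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml stands in for the <domain type='kvm'> definition shown in the log.
	xmlConfig, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Define a persistent domain from the XML, then boot it ("Creating domain...").
	dom, err := conn.DomainDefineXML(string(xmlConfig))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}

After Create() returns, the driver still has to wait for the guest to obtain a DHCP lease, which is what the retry lines below are doing.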
	I0229 19:19:10.629259   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:8f:ea:5b in network default
	I0229 19:19:10.629782   55255 main.go:141] libmachine: (calico-587185) Ensuring networks are active...
	I0229 19:19:10.629807   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:10.630492   55255 main.go:141] libmachine: (calico-587185) Ensuring network default is active
	I0229 19:19:10.630808   55255 main.go:141] libmachine: (calico-587185) Ensuring network mk-calico-587185 is active
	I0229 19:19:10.631344   55255 main.go:141] libmachine: (calico-587185) Getting domain xml...
	I0229 19:19:10.632212   55255 main.go:141] libmachine: (calico-587185) Creating domain...
	I0229 19:19:11.874987   55255 main.go:141] libmachine: (calico-587185) Waiting to get IP...
	I0229 19:19:11.875920   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:11.876310   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:11.876342   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:11.876304   55277 retry.go:31] will retry after 227.966426ms: waiting for machine to come up
	I0229 19:19:12.105466   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:12.105988   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:12.106013   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:12.105936   55277 retry.go:31] will retry after 371.618105ms: waiting for machine to come up
	I0229 19:19:12.479375   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:12.479860   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:12.479888   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:12.479818   55277 retry.go:31] will retry after 328.024741ms: waiting for machine to come up
	I0229 19:19:12.809230   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:12.809729   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:12.809761   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:12.809678   55277 retry.go:31] will retry after 403.762752ms: waiting for machine to come up
	I0229 19:19:13.215320   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:13.215838   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:13.215867   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:13.215799   55277 retry.go:31] will retry after 754.729156ms: waiting for machine to come up
	I0229 19:19:13.971712   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:13.972213   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:13.972239   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:13.972163   55277 retry.go:31] will retry after 862.857178ms: waiting for machine to come up
	I0229 19:19:14.836262   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:14.836762   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:14.836780   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:14.836732   55277 retry.go:31] will retry after 943.220387ms: waiting for machine to come up
	I0229 19:19:14.061514   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:16.064402   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:15.781233   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:15.781745   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:15.781779   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:15.781714   55277 retry.go:31] will retry after 1.400162418s: waiting for machine to come up
	I0229 19:19:17.184384   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:17.184819   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:17.184847   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:17.184776   55277 retry.go:31] will retry after 1.704850543s: waiting for machine to come up
	I0229 19:19:18.891419   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:18.891991   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:18.892021   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:18.891941   55277 retry.go:31] will retry after 1.500880071s: waiting for machine to come up
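The repeated "will retry after ...: waiting for machine to come up" lines come from a polling loop that re-queries libvirt for the domain's DHCP lease until an IP appears. A self-contained sketch of that wait-with-backoff-and-jitter pattern follows; the helper name, backoff factor and cap are illustrative assumptions, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor calls check until it succeeds or timeout elapses, sleeping a
// randomized, growing interval between attempts -- the same shape as the
// "will retry after ..." messages in the log.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2 // grow the base delay, capped at roughly 5s
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Stand-in for "look up the domain's DHCP lease"; succeeds after ~3s here.
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}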
	I0229 19:19:21.171084   54399 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:19:21.171175   54399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:19:21.171297   54399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:19:21.171415   54399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:19:21.171530   54399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:19:21.171624   54399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:19:21.173225   54399 out.go:204]   - Generating certificates and keys ...
	I0229 19:19:21.173318   54399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:19:21.173401   54399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:19:21.173486   54399 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:19:21.173554   54399 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:19:21.173635   54399 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:19:21.173705   54399 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:19:21.173781   54399 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:19:21.173947   54399 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-587185 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I0229 19:19:21.174042   54399 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:19:21.174227   54399 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-587185 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I0229 19:19:21.174317   54399 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:19:21.174402   54399 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:19:21.174459   54399 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:19:21.174530   54399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:19:21.174595   54399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:19:21.174660   54399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:19:21.174742   54399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:19:21.174825   54399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:19:21.174934   54399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:19:21.175035   54399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:19:21.176420   54399 out.go:204]   - Booting up control plane ...
	I0229 19:19:21.176535   54399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:19:21.176638   54399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:19:21.176721   54399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:19:21.176850   54399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:19:21.176962   54399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:19:21.177016   54399 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:19:21.177240   54399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:19:21.177348   54399 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503236 seconds
	I0229 19:19:21.177496   54399 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:19:21.177659   54399 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:19:21.177732   54399 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:19:21.177957   54399 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-587185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:19:21.178030   54399 kubeadm.go:322] [bootstrap-token] Using token: z82nz0.379zrvno760ddotn
	I0229 19:19:21.179648   54399 out.go:204]   - Configuring RBAC rules ...
	I0229 19:19:21.179787   54399 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:19:21.179915   54399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:19:21.180121   54399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:19:21.180280   54399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:19:21.180442   54399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:19:21.180567   54399 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:19:21.180718   54399 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:19:21.180776   54399 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:19:21.180836   54399 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:19:21.180846   54399 kubeadm.go:322] 
	I0229 19:19:21.180913   54399 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:19:21.180920   54399 kubeadm.go:322] 
	I0229 19:19:21.181023   54399 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:19:21.181035   54399 kubeadm.go:322] 
	I0229 19:19:21.181080   54399 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:19:21.181163   54399 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:19:21.181242   54399 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:19:21.181275   54399 kubeadm.go:322] 
	I0229 19:19:21.181339   54399 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:19:21.181348   54399 kubeadm.go:322] 
	I0229 19:19:21.181422   54399 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:19:21.181438   54399 kubeadm.go:322] 
	I0229 19:19:21.181506   54399 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:19:21.181588   54399 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:19:21.181690   54399 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:19:21.181699   54399 kubeadm.go:322] 
	I0229 19:19:21.181802   54399 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:19:21.181920   54399 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:19:21.181936   54399 kubeadm.go:322] 
	I0229 19:19:21.182029   54399 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z82nz0.379zrvno760ddotn \
	I0229 19:19:21.182110   54399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:19:21.182128   54399 kubeadm.go:322] 	--control-plane 
	I0229 19:19:21.182132   54399 kubeadm.go:322] 
	I0229 19:19:21.182194   54399 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:19:21.182201   54399 kubeadm.go:322] 
	I0229 19:19:21.182261   54399 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z82nz0.379zrvno760ddotn \
	I0229 19:19:21.182377   54399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
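The --discovery-token-ca-cert-hash sha256:... value printed in both join commands above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short sketch of recomputing it from the certificateDir logged earlier ("/var/lib/minikube/certs"); the exact file name is an assumption for illustration.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Cluster CA from the certificateDir shown in the kubeadm output above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// kubeadm pins the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum) // should match --discovery-token-ca-cert-hash
}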
	I0229 19:19:21.182406   54399 cni.go:84] Creating CNI manager for "kindnet"
	I0229 19:19:21.183875   54399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 19:19:18.064820   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:20.064957   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:21.185088   54399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 19:19:21.206490   54399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 19:19:21.206512   54399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 19:19:21.308559   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 19:19:22.464483   54399 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.155884485s)
	I0229 19:19:22.464530   54399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:19:22.464656   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:22.464753   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=kindnet-587185 minikube.k8s.io/updated_at=2024_02_29T19_19_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:22.537227   54399 ops.go:34] apiserver oom_adj: -16
	I0229 19:19:22.669413   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:23.170306   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:23.670227   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:24.169922   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:20.394186   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:20.394713   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:20.394734   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:20.394680   55277 retry.go:31] will retry after 1.944398369s: waiting for machine to come up
	I0229 19:19:22.341343   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:22.341902   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:22.341927   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:22.341868   55277 retry.go:31] will retry after 3.150000444s: waiting for machine to come up
	I0229 19:19:22.065454   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:24.559691   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:24.670112   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:25.170059   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:25.669503   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:26.170299   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:26.670071   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:27.170401   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:27.670281   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:28.170207   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:28.670358   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:29.169824   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:25.493644   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:25.494085   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:25.494114   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:25.494037   55277 retry.go:31] will retry after 4.509708698s: waiting for machine to come up
	I0229 19:19:27.062250   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:29.559287   53770 pod_ready.go:102] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"False"
	I0229 19:19:30.559295   53770 pod_ready.go:92] pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.559316   53770 pod_ready.go:81] duration metric: took 39.007677921s waiting for pod "coredns-5dd5756b68-tt7cg" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.559327   53770 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.565001   53770 pod_ready.go:92] pod "etcd-auto-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.565020   53770 pod_ready.go:81] duration metric: took 5.604655ms waiting for pod "etcd-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.565030   53770 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.570943   53770 pod_ready.go:92] pod "kube-apiserver-auto-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.570964   53770 pod_ready.go:81] duration metric: took 5.927236ms waiting for pod "kube-apiserver-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.570976   53770 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.577231   53770 pod_ready.go:92] pod "kube-controller-manager-auto-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.577261   53770 pod_ready.go:81] duration metric: took 6.274619ms waiting for pod "kube-controller-manager-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.577274   53770 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-frd4j" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.582770   53770 pod_ready.go:92] pod "kube-proxy-frd4j" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.582799   53770 pod_ready.go:81] duration metric: took 5.51633ms waiting for pod "kube-proxy-frd4j" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.582815   53770 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.956348   53770 pod_ready.go:92] pod "kube-scheduler-auto-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:30.956372   53770 pod_ready.go:81] duration metric: took 373.54926ms waiting for pod "kube-scheduler-auto-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:30.956383   53770 pod_ready.go:38] duration metric: took 39.416074871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
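Each pod_ready line above reflects a poll of the pod's Ready condition until it reports True. A minimal check of that condition against an already-fetched pod object, using the client-go types; a sketch, not minikube's own helper.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True -- the same
// test behind the `has status "Ready":"True"` lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: true
}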
	I0229 19:19:30.956398   53770 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:19:30.956453   53770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:19:30.981977   53770 api_server.go:72] duration metric: took 40.939667326s to wait for apiserver process to appear ...
	I0229 19:19:30.982007   53770 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:19:30.982029   53770 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8443/healthz ...
	I0229 19:19:30.987056   53770 api_server.go:279] https://192.168.50.111:8443/healthz returned 200:
	ok
	I0229 19:19:30.988425   53770 api_server.go:141] control plane version: v1.28.4
	I0229 19:19:30.988452   53770 api_server.go:131] duration metric: took 6.436888ms to wait for apiserver health ...
	I0229 19:19:30.988473   53770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:19:31.159376   53770 system_pods.go:59] 7 kube-system pods found
	I0229 19:19:31.159411   53770 system_pods.go:61] "coredns-5dd5756b68-tt7cg" [c4e1975a-f568-4e38-ba2a-257135dcacd2] Running
	I0229 19:19:31.159417   53770 system_pods.go:61] "etcd-auto-587185" [84851269-1795-47a0-8b5d-d43be58ccfbd] Running
	I0229 19:19:31.159420   53770 system_pods.go:61] "kube-apiserver-auto-587185" [46f0454a-0ae4-4723-9001-7e902394550f] Running
	I0229 19:19:31.159424   53770 system_pods.go:61] "kube-controller-manager-auto-587185" [624fc24b-9894-4eb2-b505-bd83de3dc5fa] Running
	I0229 19:19:31.159426   53770 system_pods.go:61] "kube-proxy-frd4j" [2bc17836-0200-48d0-865c-c07ed5ab7d90] Running
	I0229 19:19:31.159429   53770 system_pods.go:61] "kube-scheduler-auto-587185" [447228c6-824b-4194-b8b2-708ee6acb1fb] Running
	I0229 19:19:31.159432   53770 system_pods.go:61] "storage-provisioner" [88aa25be-954f-4fc8-8567-29db7510f2ed] Running
	I0229 19:19:31.159442   53770 system_pods.go:74] duration metric: took 170.962533ms to wait for pod list to return data ...
	I0229 19:19:31.159449   53770 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:19:31.356887   53770 default_sa.go:45] found service account: "default"
	I0229 19:19:31.356911   53770 default_sa.go:55] duration metric: took 197.456522ms for default service account to be created ...
	I0229 19:19:31.356920   53770 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:19:31.557893   53770 system_pods.go:86] 7 kube-system pods found
	I0229 19:19:31.557923   53770 system_pods.go:89] "coredns-5dd5756b68-tt7cg" [c4e1975a-f568-4e38-ba2a-257135dcacd2] Running
	I0229 19:19:31.557931   53770 system_pods.go:89] "etcd-auto-587185" [84851269-1795-47a0-8b5d-d43be58ccfbd] Running
	I0229 19:19:31.557937   53770 system_pods.go:89] "kube-apiserver-auto-587185" [46f0454a-0ae4-4723-9001-7e902394550f] Running
	I0229 19:19:31.557943   53770 system_pods.go:89] "kube-controller-manager-auto-587185" [624fc24b-9894-4eb2-b505-bd83de3dc5fa] Running
	I0229 19:19:31.557948   53770 system_pods.go:89] "kube-proxy-frd4j" [2bc17836-0200-48d0-865c-c07ed5ab7d90] Running
	I0229 19:19:31.557959   53770 system_pods.go:89] "kube-scheduler-auto-587185" [447228c6-824b-4194-b8b2-708ee6acb1fb] Running
	I0229 19:19:31.557966   53770 system_pods.go:89] "storage-provisioner" [88aa25be-954f-4fc8-8567-29db7510f2ed] Running
	I0229 19:19:31.557975   53770 system_pods.go:126] duration metric: took 201.049508ms to wait for k8s-apps to be running ...
	I0229 19:19:31.557990   53770 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:19:31.558042   53770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:19:31.576332   53770 system_svc.go:56] duration metric: took 18.335255ms WaitForService to wait for kubelet.
	I0229 19:19:31.576365   53770 kubeadm.go:581] duration metric: took 41.534062309s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:19:31.576381   53770 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:19:31.757092   53770 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:19:31.757120   53770 node_conditions.go:123] node cpu capacity is 2
	I0229 19:19:31.757131   53770 node_conditions.go:105] duration metric: took 180.745753ms to run NodePressure ...
	I0229 19:19:31.757142   53770 start.go:228] waiting for startup goroutines ...
	I0229 19:19:31.757147   53770 start.go:233] waiting for cluster config update ...
	I0229 19:19:31.757156   53770 start.go:242] writing updated cluster config ...
	I0229 19:19:31.757386   53770 ssh_runner.go:195] Run: rm -f paused
	I0229 19:19:31.807269   53770 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:19:31.809566   53770 out.go:177] * Done! kubectl is now configured to use "auto-587185" cluster and "default" namespace by default
	I0229 19:19:29.670238   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:30.169502   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:30.669511   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:31.169544   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:31.670067   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:32.170566   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:32.669811   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:33.170145   54399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:19:33.392409   54399 kubeadm.go:1088] duration metric: took 10.927805451s to wait for elevateKubeSystemPrivileges.
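The run of "kubectl get sa default" commands above is minikube polling for the "default" ServiceAccount to exist before it grants kube-system the elevated RBAC binding. The log shows this done by shelling out to kubectl over SSH; an equivalent wait expressed directly with client-go could look like the sketch below (kubeconfig path taken from the log, polling interval and timeout assumed for illustration).

package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 500ms (the log retries roughly twice a second) until the
	// "default" ServiceAccount appears or two minutes pass.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet, keep polling
			}
			return err == nil, err
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("default service account is present")
}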
	I0229 19:19:33.392447   54399 kubeadm.go:406] StartCluster complete in 23.227694661s
	I0229 19:19:33.392469   54399 settings.go:142] acquiring lock: {Name:mk2120f70b8c0f8e9d58905a579415af500b3723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:33.392583   54399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:19:33.394347   54399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/kubeconfig: {Name:mk7125f243525b7f0feb85371d9c568ed8e0cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:33.394585   54399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:19:33.394722   54399 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:19:33.394793   54399 addons.go:69] Setting storage-provisioner=true in profile "kindnet-587185"
	I0229 19:19:33.394815   54399 addons.go:234] Setting addon storage-provisioner=true in "kindnet-587185"
	I0229 19:19:33.394826   54399 config.go:182] Loaded profile config "kindnet-587185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:19:33.394859   54399 host.go:66] Checking if "kindnet-587185" exists ...
	I0229 19:19:33.394887   54399 addons.go:69] Setting default-storageclass=true in profile "kindnet-587185"
	I0229 19:19:33.394902   54399 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-587185"
	I0229 19:19:33.395316   54399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:33.395338   54399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:33.395338   54399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:33.395359   54399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:33.415083   54399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0229 19:19:33.415310   54399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0229 19:19:33.415590   54399 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:33.415693   54399 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:33.416070   54399 main.go:141] libmachine: Using API Version  1
	I0229 19:19:33.416090   54399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:33.416251   54399 main.go:141] libmachine: Using API Version  1
	I0229 19:19:33.416269   54399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:33.416417   54399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:33.416603   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetState
	I0229 19:19:33.416660   54399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:33.417242   54399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:33.417282   54399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:33.419802   54399 addons.go:234] Setting addon default-storageclass=true in "kindnet-587185"
	I0229 19:19:33.419844   54399 host.go:66] Checking if "kindnet-587185" exists ...
	I0229 19:19:33.420295   54399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:33.420349   54399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:33.443171   54399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0229 19:19:33.443175   54399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0229 19:19:33.443714   54399 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:33.443821   54399 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:33.444350   54399 main.go:141] libmachine: Using API Version  1
	I0229 19:19:33.444359   54399 main.go:141] libmachine: Using API Version  1
	I0229 19:19:33.444369   54399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:33.444376   54399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:33.444735   54399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:33.444821   54399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:33.445513   54399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:19:33.445541   54399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:19:33.447125   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetState
	I0229 19:19:33.449231   54399 main.go:141] libmachine: (kindnet-587185) Calling .DriverName
	I0229 19:19:33.451457   54399 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:19:33.453362   54399 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:19:33.453387   54399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:19:33.453411   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHHostname
	I0229 19:19:33.458013   54399 main.go:141] libmachine: (kindnet-587185) DBG | domain kindnet-587185 has defined MAC address 52:54:00:51:22:82 in network mk-kindnet-587185
	I0229 19:19:33.458448   54399 main.go:141] libmachine: (kindnet-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:22:82", ip: ""} in network mk-kindnet-587185: {Iface:virbr1 ExpiryTime:2024-02-29 20:18:53 +0000 UTC Type:0 Mac:52:54:00:51:22:82 Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:kindnet-587185 Clientid:01:52:54:00:51:22:82}
	I0229 19:19:33.458468   54399 main.go:141] libmachine: (kindnet-587185) DBG | domain kindnet-587185 has defined IP address 192.168.61.15 and MAC address 52:54:00:51:22:82 in network mk-kindnet-587185
	I0229 19:19:33.458730   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHPort
	I0229 19:19:33.458911   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHKeyPath
	I0229 19:19:33.459050   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHUsername
	I0229 19:19:33.459161   54399 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kindnet-587185/id_rsa Username:docker}
	I0229 19:19:33.464889   54399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0229 19:19:33.465434   54399 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:19:33.465995   54399 main.go:141] libmachine: Using API Version  1
	I0229 19:19:33.466016   54399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:19:33.466568   54399 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:19:33.469393   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetState
	I0229 19:19:33.471154   54399 main.go:141] libmachine: (kindnet-587185) Calling .DriverName
	I0229 19:19:33.471498   54399 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:19:33.471516   54399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:19:33.471536   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHHostname
	I0229 19:19:33.474424   54399 main.go:141] libmachine: (kindnet-587185) DBG | domain kindnet-587185 has defined MAC address 52:54:00:51:22:82 in network mk-kindnet-587185
	I0229 19:19:33.474775   54399 main.go:141] libmachine: (kindnet-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:22:82", ip: ""} in network mk-kindnet-587185: {Iface:virbr1 ExpiryTime:2024-02-29 20:18:53 +0000 UTC Type:0 Mac:52:54:00:51:22:82 Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:kindnet-587185 Clientid:01:52:54:00:51:22:82}
	I0229 19:19:33.474797   54399 main.go:141] libmachine: (kindnet-587185) DBG | domain kindnet-587185 has defined IP address 192.168.61.15 and MAC address 52:54:00:51:22:82 in network mk-kindnet-587185
	I0229 19:19:33.474964   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHPort
	I0229 19:19:33.475144   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHKeyPath
	I0229 19:19:33.475264   54399 main.go:141] libmachine: (kindnet-587185) Calling .GetSSHUsername
	I0229 19:19:33.475391   54399 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/kindnet-587185/id_rsa Username:docker}
	I0229 19:19:33.716771   54399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:19:33.780638   54399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:19:33.815097   54399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:19:33.935707   54399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-587185" context rescaled to 1 replicas
	I0229 19:19:33.935747   54399 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:19:33.937597   54399 out.go:177] * Verifying Kubernetes components...
	I0229 19:19:33.939045   54399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:19:30.008019   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:30.008559   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find current IP address of domain calico-587185 in network mk-calico-587185
	I0229 19:19:30.008584   55255 main.go:141] libmachine: (calico-587185) DBG | I0229 19:19:30.008519   55277 retry.go:31] will retry after 3.546818542s: waiting for machine to come up
	I0229 19:19:33.557562   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:33.558140   55255 main.go:141] libmachine: (calico-587185) Found IP for machine: 192.168.72.73
	I0229 19:19:33.558172   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has current primary IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:33.558187   55255 main.go:141] libmachine: (calico-587185) Reserving static IP address...
	I0229 19:19:33.558560   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find host DHCP lease matching {name: "calico-587185", mac: "52:54:00:2c:08:6d", ip: "192.168.72.73"} in network mk-calico-587185
	I0229 19:19:33.662708   55255 main.go:141] libmachine: (calico-587185) Reserved static IP address: 192.168.72.73
	I0229 19:19:33.662739   55255 main.go:141] libmachine: (calico-587185) Waiting for SSH to be available...
	I0229 19:19:33.662762   55255 main.go:141] libmachine: (calico-587185) DBG | Getting to WaitForSSH function...
	I0229 19:19:33.665991   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:33.666470   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185
	I0229 19:19:33.666490   55255 main.go:141] libmachine: (calico-587185) DBG | unable to find defined IP address of network mk-calico-587185 interface with MAC address 52:54:00:2c:08:6d
	I0229 19:19:33.666906   55255 main.go:141] libmachine: (calico-587185) DBG | Using SSH client type: external
	I0229 19:19:33.666925   55255 main.go:141] libmachine: (calico-587185) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa (-rw-------)
	I0229 19:19:33.666965   55255 main.go:141] libmachine: (calico-587185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 19:19:33.666975   55255 main.go:141] libmachine: (calico-587185) DBG | About to run SSH command:
	I0229 19:19:33.666987   55255 main.go:141] libmachine: (calico-587185) DBG | exit 0
	I0229 19:19:33.671608   55255 main.go:141] libmachine: (calico-587185) DBG | SSH cmd err, output: exit status 255: 
	I0229 19:19:33.671635   55255 main.go:141] libmachine: (calico-587185) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 19:19:33.671645   55255 main.go:141] libmachine: (calico-587185) DBG | command : exit 0
	I0229 19:19:33.671655   55255 main.go:141] libmachine: (calico-587185) DBG | err     : exit status 255
	I0229 19:19:33.671665   55255 main.go:141] libmachine: (calico-587185) DBG | output  : 
	I0229 19:19:35.080661   54399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299987897s)
	I0229 19:19:35.080717   54399 main.go:141] libmachine: Making call to close driver server
	I0229 19:19:35.080729   54399 main.go:141] libmachine: (kindnet-587185) Calling .Close
	I0229 19:19:35.080726   54399 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.265590258s)
	I0229 19:19:35.080755   54399 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.14168527s)
	I0229 19:19:35.080754   54399 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 19:19:35.081207   54399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.364114322s)
	I0229 19:19:35.081266   54399 main.go:141] libmachine: Making call to close driver server
	I0229 19:19:35.081293   54399 main.go:141] libmachine: (kindnet-587185) Calling .Close
	I0229 19:19:35.085109   54399 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:19:35.085279   54399 main.go:141] libmachine: (kindnet-587185) DBG | Closing plugin on server side
	I0229 19:19:35.085296   54399 main.go:141] libmachine: (kindnet-587185) DBG | Closing plugin on server side
	I0229 19:19:35.085282   54399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:19:35.085323   54399 main.go:141] libmachine: Making call to close driver server
	I0229 19:19:35.085333   54399 main.go:141] libmachine: (kindnet-587185) Calling .Close
	I0229 19:19:35.085436   54399 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:19:35.085465   54399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:19:35.085476   54399 main.go:141] libmachine: Making call to close driver server
	I0229 19:19:35.085486   54399 main.go:141] libmachine: (kindnet-587185) Calling .Close
	I0229 19:19:35.086540   54399 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:19:35.086554   54399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:19:35.086636   54399 main.go:141] libmachine: (kindnet-587185) DBG | Closing plugin on server side
	I0229 19:19:35.086641   54399 main.go:141] libmachine: (kindnet-587185) DBG | Closing plugin on server side
	I0229 19:19:35.086694   54399 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:19:35.086724   54399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:19:35.087562   54399 node_ready.go:35] waiting up to 15m0s for node "kindnet-587185" to be "Ready" ...
	I0229 19:19:35.094744   54399 main.go:141] libmachine: Making call to close driver server
	I0229 19:19:35.094766   54399 main.go:141] libmachine: (kindnet-587185) Calling .Close
	I0229 19:19:35.095010   54399 main.go:141] libmachine: (kindnet-587185) DBG | Closing plugin on server side
	I0229 19:19:35.095036   54399 main.go:141] libmachine: Successfully made call to close driver server
	I0229 19:19:35.095059   54399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 19:19:35.096902   54399 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:19:35.098349   54399 addons.go:505] enable addons completed in 1.703633289s: enabled=[storage-provisioner default-storageclass]
	I0229 19:19:37.091757   54399 node_ready.go:58] node "kindnet-587185" has status "Ready":"False"
	I0229 19:19:39.092133   54399 node_ready.go:58] node "kindnet-587185" has status "Ready":"False"
	I0229 19:19:36.673048   55255 main.go:141] libmachine: (calico-587185) DBG | Getting to WaitForSSH function...
	I0229 19:19:36.675467   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.675843   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:36.675872   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.675944   55255 main.go:141] libmachine: (calico-587185) DBG | Using SSH client type: external
	I0229 19:19:36.675986   55255 main.go:141] libmachine: (calico-587185) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa (-rw-------)
	I0229 19:19:36.676032   55255 main.go:141] libmachine: (calico-587185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 19:19:36.676046   55255 main.go:141] libmachine: (calico-587185) DBG | About to run SSH command:
	I0229 19:19:36.676068   55255 main.go:141] libmachine: (calico-587185) DBG | exit 0
	I0229 19:19:36.799776   55255 main.go:141] libmachine: (calico-587185) DBG | SSH cmd err, output: <nil>: 
	I0229 19:19:36.800082   55255 main.go:141] libmachine: (calico-587185) KVM machine creation complete!
	I0229 19:19:36.800417   55255 main.go:141] libmachine: (calico-587185) Calling .GetConfigRaw
	I0229 19:19:36.800948   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:36.801212   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:36.801393   55255 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 19:19:36.801402   55255 main.go:141] libmachine: (calico-587185) Calling .GetState
	I0229 19:19:36.802717   55255 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 19:19:36.802731   55255 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 19:19:36.802738   55255 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 19:19:36.802746   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:36.805372   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.805783   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:36.805797   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.805962   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:36.806147   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:36.806325   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:36.806521   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:36.806703   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:36.806934   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:36.806953   55255 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 19:19:36.910735   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
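The WaitForSSH phase above first probes the guest with an external ssh invocation (failing with exit status 255 until sshd is reachable) and then runs "exit 0" through a native client once the machine is up. A compact sketch of that liveness probe with golang.org/x/crypto/ssh; host, user and key path are copied from the log, and this is illustrative rather than the libmachine implementation.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}

	client, err := ssh.Dial("tcp", "192.168.72.73:22", cfg)
	if err != nil {
		log.Fatal(err) // before sshd is up this fails and the caller retries
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same "exit 0" the provisioner uses as its availability check.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}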
	I0229 19:19:36.910760   55255 main.go:141] libmachine: Detecting the provisioner...
	I0229 19:19:36.910768   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:36.913572   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.913897   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:36.913920   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:36.914137   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:36.914358   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:36.914523   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:36.914733   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:36.914934   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:36.915184   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:36.915203   55255 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 19:19:37.020558   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 19:19:37.020637   55255 main.go:141] libmachine: found compatible host: buildroot
	I0229 19:19:37.020651   55255 main.go:141] libmachine: Provisioning with buildroot...
	I0229 19:19:37.020670   55255 main.go:141] libmachine: (calico-587185) Calling .GetMachineName
	I0229 19:19:37.020935   55255 buildroot.go:166] provisioning hostname "calico-587185"
	I0229 19:19:37.020960   55255 main.go:141] libmachine: (calico-587185) Calling .GetMachineName
	I0229 19:19:37.021159   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:37.024093   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.024394   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.024441   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.024523   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:37.024710   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.024878   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.025004   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:37.025160   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:37.025314   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:37.025325   55255 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-587185 && echo "calico-587185" | sudo tee /etc/hostname
	I0229 19:19:37.146778   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-587185
	
	I0229 19:19:37.146807   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:37.149902   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.150204   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.150246   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.150398   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:37.150589   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.150762   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.150935   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:37.151110   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:37.151289   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:37.151304   55255 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-587185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-587185/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-587185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:19:37.270209   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
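
The provisioning step above pushes a small shell snippet over SSH so that 127.0.1.1 resolves to the machine's hostname. The following is a minimal Go sketch of the same idea, operating on an in-memory /etc/hosts buffer rather than over SSH; the function name and the in-memory approach are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname rewrites an /etc/hosts buffer so that 127.0.1.1 maps to
// hostname, mirroring the grep/sed logic shown in the provisioning log above.
func ensureHostname(hosts, hostname string) string {
	// Already mapped on some line? Leave the file untouched.
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
		return hosts
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if line127.MatchString(hosts) {
		// Replace the existing 127.0.1.1 entry.
		return line127.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	// Otherwise append a new entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(in, "calico-587185"))
}
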
	I0229 19:19:37.270240   55255 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 19:19:37.270273   55255 buildroot.go:174] setting up certificates
	I0229 19:19:37.270296   55255 provision.go:83] configureAuth start
	I0229 19:19:37.270318   55255 main.go:141] libmachine: (calico-587185) Calling .GetMachineName
	I0229 19:19:37.270610   55255 main.go:141] libmachine: (calico-587185) Calling .GetIP
	I0229 19:19:37.273619   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.273939   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.273999   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.274286   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:37.276695   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.277033   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.277055   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.277281   55255 provision.go:138] copyHostCerts
	I0229 19:19:37.277336   55255 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 19:19:37.277352   55255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 19:19:37.277418   55255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 19:19:37.277519   55255 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 19:19:37.277527   55255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 19:19:37.277551   55255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 19:19:37.277623   55255 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 19:19:37.277630   55255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 19:19:37.277650   55255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 19:19:37.277706   55255 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.calico-587185 san=[192.168.72.73 192.168.72.73 localhost 127.0.0.1 minikube calico-587185]
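
configureAuth above regenerates a server certificate whose SANs cover the machine IP, localhost, and the hostname. Below is a self-contained sketch of that pattern using only the Go standard library; the throwaway in-memory CA and the SAN values are taken from the log for illustration, and this is not the code path minikube actually runs.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube reuses ca.pem/ca-key.pem from its cert store).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs listed in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-587185"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.73"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "calico-587185"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
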
	I0229 19:19:37.468654   55255 provision.go:172] copyRemoteCerts
	I0229 19:19:37.468707   55255 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:19:37.468729   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:37.472992   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.473467   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.473514   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.473737   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:37.473969   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.474168   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:37.474336   55255 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa Username:docker}
	I0229 19:19:37.567116   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:19:37.602413   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 19:19:37.637859   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 19:19:37.669488   55255 provision.go:86] duration metric: configureAuth took 399.171214ms
	I0229 19:19:37.669518   55255 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:19:37.669719   55255 config.go:182] Loaded profile config "calico-587185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:19:37.669811   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:37.672600   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.672958   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:37.672989   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:37.673232   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:37.673463   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.673637   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:37.673760   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:37.673969   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:37.674185   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:37.674207   55255 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 19:19:38.000657   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 19:19:38.000702   55255 main.go:141] libmachine: Checking connection to Docker...
	I0229 19:19:38.000713   55255 main.go:141] libmachine: (calico-587185) Calling .GetURL
	I0229 19:19:38.002178   55255 main.go:141] libmachine: (calico-587185) DBG | Using libvirt version 6000000
	I0229 19:19:38.004638   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.005051   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.005078   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.005243   55255 main.go:141] libmachine: Docker is up and running!
	I0229 19:19:38.005256   55255 main.go:141] libmachine: Reticulating splines...
	I0229 19:19:38.005262   55255 client.go:171] LocalClient.Create took 27.847311987s
	I0229 19:19:38.005286   55255 start.go:167] duration metric: libmachine.API.Create for "calico-587185" took 27.847382485s
	I0229 19:19:38.005295   55255 start.go:300] post-start starting for "calico-587185" (driver="kvm2")
	I0229 19:19:38.005307   55255 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:19:38.005322   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:38.005560   55255 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:19:38.005580   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:38.007998   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.008323   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.008351   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.008558   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:38.008763   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:38.008937   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:38.009069   55255 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa Username:docker}
	I0229 19:19:38.097032   55255 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:19:38.102168   55255 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:19:38.102197   55255 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 19:19:38.102281   55255 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 19:19:38.102379   55255 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 19:19:38.102497   55255 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:19:38.114567   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:19:38.145940   55255 start.go:303] post-start completed in 140.628428ms
	I0229 19:19:38.146005   55255 main.go:141] libmachine: (calico-587185) Calling .GetConfigRaw
	I0229 19:19:38.146642   55255 main.go:141] libmachine: (calico-587185) Calling .GetIP
	I0229 19:19:38.149640   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.150012   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.150036   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.150311   55255 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/config.json ...
	I0229 19:19:38.150501   55255 start.go:128] duration metric: createHost completed in 28.011829333s
	I0229 19:19:38.150527   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:38.152942   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.153287   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.153313   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.153535   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:38.153727   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:38.153857   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:38.154025   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:38.154222   55255 main.go:141] libmachine: Using SSH client type: native
	I0229 19:19:38.154387   55255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0229 19:19:38.154401   55255 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 19:19:38.264830   55255 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709234378.250539663
	
	I0229 19:19:38.264875   55255 fix.go:206] guest clock: 1709234378.250539663
	I0229 19:19:38.264885   55255 fix.go:219] Guest: 2024-02-29 19:19:38.250539663 +0000 UTC Remote: 2024-02-29 19:19:38.150515267 +0000 UTC m=+28.280761767 (delta=100.024396ms)
	I0229 19:19:38.264910   55255 fix.go:190] guest clock delta is within tolerance: 100.024396ms
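
The fix step above reads the guest clock with `date +%s.%N` and accepts the host/guest skew when it is inside a tolerance. A small sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output (e.g. "1709234378.250539663")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709234378.250539663")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative tolerance
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
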
	I0229 19:19:38.264920   55255 start.go:83] releasing machines lock for "calico-587185", held for 28.126353076s
	I0229 19:19:38.264937   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:38.265182   55255 main.go:141] libmachine: (calico-587185) Calling .GetIP
	I0229 19:19:38.268283   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.268667   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.268690   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.268914   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:38.269605   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:38.269832   55255 main.go:141] libmachine: (calico-587185) Calling .DriverName
	I0229 19:19:38.269941   55255 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:19:38.269981   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:38.270082   55255 ssh_runner.go:195] Run: cat /version.json
	I0229 19:19:38.270107   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHHostname
	I0229 19:19:38.273167   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.273396   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.273824   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.273849   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.273871   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:38.273890   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:38.273980   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:38.274111   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHPort
	I0229 19:19:38.274209   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:38.274385   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHKeyPath
	I0229 19:19:38.274390   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:38.274540   55255 main.go:141] libmachine: (calico-587185) Calling .GetSSHUsername
	I0229 19:19:38.274549   55255 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa Username:docker}
	I0229 19:19:38.274673   55255 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/calico-587185/id_rsa Username:docker}
	I0229 19:19:38.384747   55255 ssh_runner.go:195] Run: systemctl --version
	I0229 19:19:38.391663   55255 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 19:19:38.556967   55255 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 19:19:38.564483   55255 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:19:38.564555   55255 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:19:38.585740   55255 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:19:38.585765   55255 start.go:475] detecting cgroup driver to use...
	I0229 19:19:38.585862   55255 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:19:38.610253   55255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:19:38.627712   55255 docker.go:217] disabling cri-docker service (if available) ...
	I0229 19:19:38.627792   55255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 19:19:38.644726   55255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 19:19:38.660696   55255 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 19:19:38.800492   55255 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 19:19:38.980717   55255 docker.go:233] disabling docker service ...
	I0229 19:19:38.980789   55255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 19:19:38.999500   55255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 19:19:39.014021   55255 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 19:19:39.159228   55255 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 19:19:39.295768   55255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 19:19:39.313310   55255 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:19:39.341447   55255 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 19:19:39.341509   55255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:19:39.355349   55255 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 19:19:39.355413   55255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:19:39.369934   55255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:19:39.384722   55255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
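
The sed invocations above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The same edits, expressed as a Go sketch over the file contents (purely in-memory here; applying them on the guest would still go through the SSH runner as shown in the log):

package main

import (
	"fmt"
	"regexp"
)

// configureCrio mirrors the sed edits in the log: set pause_image, switch the
// cgroup manager to cgroupfs, and put conmon into the "pod" cgroup.
func configureCrio(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.runtime]\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\npause_image = \"registry.k8s.io/pause:3.6\"\n"
	fmt.Print(configureCrio(in))
}
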
	I0229 19:19:39.397677   55255 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:19:39.411136   55255 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:19:39.424779   55255 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 19:19:39.424850   55255 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 19:19:39.443539   55255 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
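
Before restarting CRI-O the run checks net.bridge.bridge-nf-call-iptables (loading br_netfilter when the sysctl is missing, as the status-255 error above shows) and forces IPv4 forwarding on. A sketch of the same checks done by reading /proc directly; it assumes it is running as root on a Linux guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctl); err != nil {
		// Same situation as the "cannot stat" error above: the bridge
		// netfilter module is not loaded yet, so load it.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
	data, _ := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	fmt.Println("ip_forward =", strings.TrimSpace(string(data)))
}
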
	I0229 19:19:39.456576   55255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:19:39.628007   55255 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 19:19:39.789366   55255 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 19:19:39.789448   55255 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 19:19:39.795351   55255 start.go:543] Will wait 60s for crictl version
	I0229 19:19:39.795412   55255 ssh_runner.go:195] Run: which crictl
	I0229 19:19:39.799592   55255 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:19:39.841907   55255 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
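
After the restart, the run waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer a version query, as the two "Will wait 60s" lines above state. A minimal polling sketch of that wait; the one-second interval and error handling are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check once a second until it succeeds or timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Wait for the CRI socket, then for crictl to report a version.
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("crictl", "version").Run()
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-o is ready")
}
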
	I0229 19:19:39.841982   55255 ssh_runner.go:195] Run: crio --version
	I0229 19:19:39.881885   55255 ssh_runner.go:195] Run: crio --version
	I0229 19:19:39.928051   55255 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 19:19:40.092466   54399 node_ready.go:49] node "kindnet-587185" has status "Ready":"True"
	I0229 19:19:40.092487   54399 node_ready.go:38] duration metric: took 5.004903081s waiting for node "kindnet-587185" to be "Ready" ...
	I0229 19:19:40.092496   54399 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:19:40.101866   54399 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-zk479" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.615745   54399 pod_ready.go:92] pod "coredns-5dd5756b68-zk479" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:41.615777   54399 pod_ready.go:81] duration metric: took 1.513883976s waiting for pod "coredns-5dd5756b68-zk479" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.615792   54399 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.627495   54399 pod_ready.go:92] pod "etcd-kindnet-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:41.627517   54399 pod_ready.go:81] duration metric: took 11.717474ms waiting for pod "etcd-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.627528   54399 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.635169   54399 pod_ready.go:92] pod "kube-apiserver-kindnet-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:41.635195   54399 pod_ready.go:81] duration metric: took 7.65917ms waiting for pod "kube-apiserver-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.635209   54399 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.650069   54399 pod_ready.go:92] pod "kube-controller-manager-kindnet-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:41.650098   54399 pod_ready.go:81] duration metric: took 14.881389ms waiting for pod "kube-controller-manager-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.650113   54399 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-bplk8" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.692974   54399 pod_ready.go:92] pod "kube-proxy-bplk8" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:41.692998   54399 pod_ready.go:81] duration metric: took 42.876809ms waiting for pod "kube-proxy-bplk8" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:41.693013   54399 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:42.091800   54399 pod_ready.go:92] pod "kube-scheduler-kindnet-587185" in "kube-system" namespace has status "Ready":"True"
	I0229 19:19:42.091837   54399 pod_ready.go:81] duration metric: took 398.809652ms waiting for pod "kube-scheduler-kindnet-587185" in "kube-system" namespace to be "Ready" ...
	I0229 19:19:42.091854   54399 pod_ready.go:38] duration metric: took 1.999345057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:19:42.091868   54399 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:19:42.091932   54399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:19:42.109151   54399 api_server.go:72] duration metric: took 8.173367718s to wait for apiserver process to appear ...
	I0229 19:19:42.109178   54399 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:19:42.109199   54399 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0229 19:19:42.114341   54399 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0229 19:19:42.116913   54399 api_server.go:141] control plane version: v1.28.4
	I0229 19:19:42.116943   54399 api_server.go:131] duration metric: took 7.756355ms to wait for apiserver health ...
	I0229 19:19:42.116953   54399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:19:42.296893   54399 system_pods.go:59] 8 kube-system pods found
	I0229 19:19:42.296933   54399 system_pods.go:61] "coredns-5dd5756b68-zk479" [aacc873c-ec9e-4d94-86b2-5d8df444bb6a] Running
	I0229 19:19:42.296942   54399 system_pods.go:61] "etcd-kindnet-587185" [6a9d2086-c0b7-4bff-a57f-afe968d87bf0] Running
	I0229 19:19:42.296949   54399 system_pods.go:61] "kindnet-x7txg" [f7242b61-315b-44c1-87c5-3b0e8fcb4d13] Running
	I0229 19:19:42.296959   54399 system_pods.go:61] "kube-apiserver-kindnet-587185" [e33c9090-a086-4fee-8308-c0e05f003992] Running
	I0229 19:19:42.296966   54399 system_pods.go:61] "kube-controller-manager-kindnet-587185" [c9f7ca9d-a365-4d9f-957e-55942ce09fa0] Running
	I0229 19:19:42.296976   54399 system_pods.go:61] "kube-proxy-bplk8" [b6c31556-7830-4cd4-bd76-9755ab18744c] Running
	I0229 19:19:42.296983   54399 system_pods.go:61] "kube-scheduler-kindnet-587185" [96374d7f-2809-4b86-8f07-bc6a787ec22d] Running
	I0229 19:19:42.296990   54399 system_pods.go:61] "storage-provisioner" [3b260b7f-7191-4b84-8bee-f30caad344dd] Running
	I0229 19:19:42.296999   54399 system_pods.go:74] duration metric: took 180.038503ms to wait for pod list to return data ...
	I0229 19:19:42.297009   54399 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:19:42.493546   54399 default_sa.go:45] found service account: "default"
	I0229 19:19:42.493577   54399 default_sa.go:55] duration metric: took 196.556677ms for default service account to be created ...
	I0229 19:19:42.493589   54399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:19:42.696561   54399 system_pods.go:86] 8 kube-system pods found
	I0229 19:19:42.696601   54399 system_pods.go:89] "coredns-5dd5756b68-zk479" [aacc873c-ec9e-4d94-86b2-5d8df444bb6a] Running
	I0229 19:19:42.696610   54399 system_pods.go:89] "etcd-kindnet-587185" [6a9d2086-c0b7-4bff-a57f-afe968d87bf0] Running
	I0229 19:19:42.696617   54399 system_pods.go:89] "kindnet-x7txg" [f7242b61-315b-44c1-87c5-3b0e8fcb4d13] Running
	I0229 19:19:42.696623   54399 system_pods.go:89] "kube-apiserver-kindnet-587185" [e33c9090-a086-4fee-8308-c0e05f003992] Running
	I0229 19:19:42.696631   54399 system_pods.go:89] "kube-controller-manager-kindnet-587185" [c9f7ca9d-a365-4d9f-957e-55942ce09fa0] Running
	I0229 19:19:42.696637   54399 system_pods.go:89] "kube-proxy-bplk8" [b6c31556-7830-4cd4-bd76-9755ab18744c] Running
	I0229 19:19:42.696644   54399 system_pods.go:89] "kube-scheduler-kindnet-587185" [96374d7f-2809-4b86-8f07-bc6a787ec22d] Running
	I0229 19:19:42.696650   54399 system_pods.go:89] "storage-provisioner" [3b260b7f-7191-4b84-8bee-f30caad344dd] Running
	I0229 19:19:42.696662   54399 system_pods.go:126] duration metric: took 203.066609ms to wait for k8s-apps to be running ...
	I0229 19:19:42.696677   54399 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:19:42.696736   54399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:19:42.717576   54399 system_svc.go:56] duration metric: took 20.892209ms WaitForService to wait for kubelet.
	I0229 19:19:42.717604   54399 kubeadm.go:581] duration metric: took 8.781826261s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:19:42.717625   54399 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:19:42.897109   54399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:19:42.897145   54399 node_conditions.go:123] node cpu capacity is 2
	I0229 19:19:42.897168   54399 node_conditions.go:105] duration metric: took 179.537842ms to run NodePressure ...
	I0229 19:19:42.897183   54399 start.go:228] waiting for startup goroutines ...
	I0229 19:19:42.897192   54399 start.go:233] waiting for cluster config update ...
	I0229 19:19:42.897205   54399 start.go:242] writing updated cluster config ...
	I0229 19:19:42.932860   54399 ssh_runner.go:195] Run: rm -f paused
	I0229 19:19:43.007239   54399 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:19:43.142539   54399 out.go:177] * Done! kubectl is now configured to use "kindnet-587185" cluster and "default" namespace by default
	I0229 19:19:39.929307   55255 main.go:141] libmachine: (calico-587185) Calling .GetIP
	I0229 19:19:39.932680   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:39.933160   55255 main.go:141] libmachine: (calico-587185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:08:6d", ip: ""} in network mk-calico-587185: {Iface:virbr4 ExpiryTime:2024-02-29 20:19:27 +0000 UTC Type:0 Mac:52:54:00:2c:08:6d Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:calico-587185 Clientid:01:52:54:00:2c:08:6d}
	I0229 19:19:39.933185   55255 main.go:141] libmachine: (calico-587185) DBG | domain calico-587185 has defined IP address 192.168.72.73 and MAC address 52:54:00:2c:08:6d in network mk-calico-587185
	I0229 19:19:39.933406   55255 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 19:19:39.939382   55255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:19:39.957918   55255 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 19:19:39.957978   55255 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:19:40.004645   55255 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 19:19:40.004726   55255 ssh_runner.go:195] Run: which lz4
	I0229 19:19:40.009563   55255 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 19:19:40.014409   55255 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 19:19:40.014432   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 19:19:42.028410   55255 crio.go:444] Took 2.018868 seconds to copy over tarball
	I0229 19:19:42.028534   55255 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 19:19:45.225758   55255 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.197192496s)
	I0229 19:19:45.225790   55255 crio.go:451] Took 3.197346 seconds to extract the tarball
	I0229 19:19:45.225802   55255 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 19:19:45.275142   55255 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:19:45.331342   55255 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 19:19:45.331367   55255 cache_images.go:84] Images are preloaded, skipping loading
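
The preload decision above comes from parsing `sudo crictl images --output json` and looking for the expected kube-apiserver tag: the first check fails (images not preloaded), the tarball is copied and extracted, and the second check passes. A sketch of that lookup; the JSON field names (an "images" array with "repoTags") are an assumption about crictl's output shape.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether `crictl images --output json` lists wantTag.
func hasImage(wantTag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wantTag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preloaded:", ok) // false is what triggers the tarball copy seen above
}
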
	I0229 19:19:45.331452   55255 ssh_runner.go:195] Run: crio config
	I0229 19:19:45.395381   55255 cni.go:84] Creating CNI manager for "calico"
	I0229 19:19:45.395424   55255 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:19:45.395447   55255 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.73 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-587185 NodeName:calico-587185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:19:45.395618   55255 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-587185"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:19:45.395727   55255 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-587185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-587185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0229 19:19:45.395796   55255 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:19:45.408560   55255 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:19:45.408636   55255 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:19:45.420541   55255 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0229 19:19:45.440271   55255 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:19:45.462691   55255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
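
The kubeadm.yaml.new written above is the kubeadm config printed earlier, rendered from templates filled with the kubeadm options struct. A trimmed sketch of that rendering for just the InitConfiguration block; the struct and template here are simplified illustrations, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

type nodeOpts struct {
	NodeName  string
	NodeIP    string
	CRISocket string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	opts := nodeOpts{
		NodeName:  "calico-587185",
		NodeIP:    "192.168.72.73",
		CRISocket: "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
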
	I0229 19:19:45.484938   55255 ssh_runner.go:195] Run: grep 192.168.72.73	control-plane.minikube.internal$ /etc/hosts
	I0229 19:19:45.489740   55255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:19:45.506714   55255 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185 for IP: 192.168.72.73
	I0229 19:19:45.506766   55255 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:45.506917   55255 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 19:19:45.506969   55255 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 19:19:45.507007   55255 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.key
	I0229 19:19:45.507045   55255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.crt with IP's: []
	I0229 19:19:45.939712   55255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.crt ...
	I0229 19:19:45.939741   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.crt: {Name:mk42d73f71bb67edb47c9f8534e9e24ae3c43f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:45.939907   55255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.key ...
	I0229 19:19:45.939922   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/client.key: {Name:mk8b076d60f3694585f2be37b4b5d6d9774e4484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:45.939998   55255 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key.e9608d4e
	I0229 19:19:45.940013   55255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt.e9608d4e with IP's: [192.168.72.73 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:19:46.077061   55255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt.e9608d4e ...
	I0229 19:19:46.077089   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt.e9608d4e: {Name:mk4ab99c0a1f9ffe1769f9f9306d7804e86b2ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:46.077246   55255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key.e9608d4e ...
	I0229 19:19:46.077259   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key.e9608d4e: {Name:mk490eb9b36caf8466cef7c4ae92f030a28cf57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:46.077333   55255 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt.e9608d4e -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt
	I0229 19:19:46.077425   55255 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key.e9608d4e -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key
	I0229 19:19:46.077478   55255 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.key
	I0229 19:19:46.077492   55255 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.crt with IP's: []
	I0229 19:19:46.121684   55255 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.crt ...
	I0229 19:19:46.121712   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.crt: {Name:mkc80ec51bff9034a01bddfb82b940a93e6bbe0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:46.121868   55255 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.key ...
	I0229 19:19:46.121878   55255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.key: {Name:mkc87be3bd39fc83e4148d9c78fb33092a8b9d99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:19:46.122035   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 19:19:46.122073   55255 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 19:19:46.122083   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 19:19:46.122106   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 19:19:46.122132   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 19:19:46.122158   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 19:19:46.122194   55255 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:19:46.122771   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:19:46.151050   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:19:46.180639   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:19:46.210719   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/calico-587185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:19:46.239743   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:19:46.267909   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:19:46.297025   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:19:46.325729   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 19:19:46.354681   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:19:46.382993   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 19:19:46.411824   55255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 19:19:46.440685   55255 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:19:46.461352   55255 ssh_runner.go:195] Run: openssl version
	I0229 19:19:46.467608   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:19:46.482207   55255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:46.487626   55255 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:46.487683   55255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:19:46.494258   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:19:46.509641   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 19:19:46.524684   55255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 19:19:46.529894   55255 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 19:19:46.529947   55255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 19:19:46.536589   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
	I0229 19:19:46.551759   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 19:19:46.566615   55255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 19:19:46.571954   55255 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 19:19:46.572025   55255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 19:19:46.579120   55255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:19:46.593652   55255 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:19:46.598262   55255 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:19:46.598318   55255 kubeadm.go:404] StartCluster: {Name:calico-587185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:calico-587185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:19:46.598390   55255 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 19:19:46.598446   55255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 19:19:46.645687   55255 cri.go:89] found id: ""
	I0229 19:19:46.645796   55255 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:19:46.658351   55255 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:19:46.670906   55255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:19:46.682739   55255 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:19:46.682773   55255 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:19:46.737940   55255 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:19:46.738004   55255 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:19:46.886206   55255 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:19:46.886370   55255 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:19:46.886523   55255 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:19:47.153239   55255 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.061346855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a10f7bad-8895-4af4-83f3-3dc8bc044aed name=/runtime.v1.RuntimeService/Version
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.062840386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8feacf38-6856-443a-a511-13e55524e191 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.063233212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234388063212559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8feacf38-6856-443a-a511-13e55524e191 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.063885800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cb79578-75f5-4e07-a3db-901e8a1f55bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.063938129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cb79578-75f5-4e07-a3db-901e8a1f55bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.064100718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cb79578-75f5-4e07-a3db-901e8a1f55bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.107514567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11021521-8017-4fcc-a91d-ab4969ac53cb name=/runtime.v1.RuntimeService/Version
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.107642838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11021521-8017-4fcc-a91d-ab4969ac53cb name=/runtime.v1.RuntimeService/Version
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.108762643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65029392-05da-43cf-9e14-b2cf40cfddda name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.109147985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234388109128118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65029392-05da-43cf-9e14-b2cf40cfddda name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.109853668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25f0aa51-01c8-45df-8fd4-7a86f350184d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.109902313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25f0aa51-01c8-45df-8fd4-7a86f350184d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.110083062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25f0aa51-01c8-45df-8fd4-7a86f350184d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.148261319Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8c748b9c-543f-48a3-baf7-117972ba69d0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.148481551Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0525367f-c4e1-4d3e-945b-69f408e9fcb0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233424960829618,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-29T19:03:44.653307525Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:864172c3d2b71876ba5e8920a219a339174adb3667c9ea38dd3a022e67d707d3,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-v95ws,Uid:e3545189-e705-4d6e-bab6-e1eceba83c0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233424769963200,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-v95ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3545189-e705-4d6e-bab6-e
1eceba83c0f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:03:44.456923899Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fmptg,Uid:ac14ccc5-53fb-41c6-b09a-bdb801f91088,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233421929707914,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:03:41.604097895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&PodSandboxMetadata{Name:kube-proxy-bvrxx,Uid:b826c147-0486-40
5d-95c7-9b029349e27c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233421455828637,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826c147-0486-405d-95c7-9b029349e27c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:03:41.138097885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-153528,Uid:300cdbf38621f03273215bd34d70f268,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233402380436825,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 300cdbf38621f03273215bd34d70f268,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.210:8444,kubernetes.io/config.hash: 300cdbf38621f03273215bd34d70f268,kubernetes.io/config.seen: 2024-02-29T19:03:21.930893052Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-153528,Uid:cbfd49db3e5a72e0f323c7205da12bfe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233402378805009,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cbfd49db3e5a72e0f323c7205da12bfe,kubernetes.io/config.seen: 2024-02-2
9T19:03:21.930895052Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-153528,Uid:6333006f11b04aef2d656b07d9ad7aee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233402369234710,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b07d9ad7aee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.210:2379,kubernetes.io/config.hash: 6333006f11b04aef2d656b07d9ad7aee,kubernetes.io/config.seen: 2024-02-29T19:03:21.930889111Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k
8s-diff-port-153528,Uid:5fe9c3d60541d7b57434b659717008ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233402361289481,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe9c3d60541d7b57434b659717008ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5fe9c3d60541d7b57434b659717008ad,kubernetes.io/config.seen: 2024-02-29T19:03:21.930894236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8c748b9c-543f-48a3-baf7-117972ba69d0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.150223709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c80d564a-a153-46ef-a23b-37fb7a3067f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.150284367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c80d564a-a153-46ef-a23b-37fb7a3067f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.150442900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c80d564a-a153-46ef-a23b-37fb7a3067f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.154892390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98dd18a2-ae9b-489e-849e-45a17f3c7e9e name=/runtime.v1.RuntimeService/Version
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.154973018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98dd18a2-ae9b-489e-849e-45a17f3c7e9e name=/runtime.v1.RuntimeService/Version
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.157964218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a081ead-6e7b-4bf8-b6d7-e1a590e8a940 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.158448800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234388158420391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a081ead-6e7b-4bf8-b6d7-e1a590e8a940 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.159216662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2ff100b-b3cf-455a-901d-53db5820fe96 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.159267681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2ff100b-b3cf-455a-901d-53db5820fe96 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:19:48 default-k8s-diff-port-153528 crio[674]: time="2024-02-29 19:19:48.159431308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f,PodSandboxId:fa92f6f8dc963965dc09e7002094477c92b2ffb0bfdb58c6457fd36a3b6dbe1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233425069723905,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0525367f-c4e1-4d3e-945b-69f408e9fcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 2f27b628,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3,PodSandboxId:d54922b282ed1ddf53773690fc9d42a5d43f36a492018247f212ce0335c0adec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709233422804016064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fmptg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac14ccc5-53fb-41c6-b09a-bdb801f91088,},Annotations:map[string]string{io.kubernetes.container.hash: 760ceb5f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f,PodSandboxId:7611ffeb0a2a37f9d736fb6beee564b901e5355493b9ffbda739259a64524150,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709233421592500015,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvrxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b826c147-0486-405d-95c7-9b029349e27c,},Annotations:map[string]string{io.kubernetes.container.hash: a335adc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff,PodSandboxId:e4243c26556d844011b66db88fdbe6db508424688d95cf1293c1855b53cf4016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709233402721000236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cbfd49db3e5a72e0f323c7205da12bfe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf,PodSandboxId:eba21c4e573ce525969137ac5632ffa7e0806f5d50d138d6266963aa6f3cf388,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709233402667972238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6333006f11b04aef2d656b0
7d9ad7aee,},Annotations:map[string]string{io.kubernetes.container.hash: cfae2ccb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec,PodSandboxId:5585157703fb8d1200d9fb3419298f22e63788f5e7642579a59af16a0aa4ee31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709233402657225134,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300cdbf38621f03273215bd34
d70f268,},Annotations:map[string]string{io.kubernetes.container.hash: 2226a314,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3,PodSandboxId:aca74cc915a027472b2d39ec7aa05b02ac93fc5c0648eb05a259392b62a497ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709233402543647561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-153528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5fe9c3d60541d7b57434b659717008ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2ff100b-b3cf-455a-901d-53db5820fe96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd100a6a78ff3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   fa92f6f8dc963       storage-provisioner
	f3783ae6a7523       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   d54922b282ed1       coredns-5dd5756b68-fmptg
	66a474fccaab4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   7611ffeb0a2a3       kube-proxy-bvrxx
	7ad8f5f1b340c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   e4243c26556d8       kube-scheduler-default-k8s-diff-port-153528
	ea63327422de9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   eba21c4e573ce       etcd-default-k8s-diff-port-153528
	afb68f5e908ce       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   5585157703fb8       kube-apiserver-default-k8s-diff-port-153528
	f9076d6488b1c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   aca74cc915a02       kube-controller-manager-default-k8s-diff-port-153528
	
	
	==> coredns [f3783ae6a7523e08992c41219eb196d6d50ec4a3033a63f5ac801053aac04cc3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39705 - 48666 "HINFO IN 6790378613609168493.1271217274832031905. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014988537s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-153528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-153528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=default-k8s-diff-port-153528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_03_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:03:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-153528
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:19:08 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:19:08 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:19:08 +0000   Thu, 29 Feb 2024 19:03:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:19:08 +0000   Thu, 29 Feb 2024 19:03:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    default-k8s-diff-port-153528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 aad8c663d8bf4a83b64ea1f43ab2b7c3
	  System UUID:                aad8c663-d8bf-4a83-b64e-a1f43ab2b7c3
	  Boot ID:                    cdea6de5-2171-467a-b107-96f0c7ab4b21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fmptg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-153528                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-153528             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-153528    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-bvrxx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-153528             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-v95ws                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeNotReady
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-153528 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-153528 event: Registered Node default-k8s-diff-port-153528 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055202] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.052603] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 18:58] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.679083] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.009844] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.065816] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065492] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.194837] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.141990] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.318484] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +17.262167] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.073820] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.658975] kauditd_printk_skb: 72 callbacks suppressed
	[  +6.447092] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.326145] systemd-fstab-generator[3377]: Ignoring "noauto" option for root device
	[  +7.285665] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.114066] kauditd_printk_skb: 53 callbacks suppressed
	[ +12.601674] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.006417] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 19:15] hrtimer: interrupt took 5622851 ns
	
	
	==> etcd [ea63327422de95ad071cbb5dd8bd32fb4b052a80e014485ef75bf233da53bebf] <==
	{"level":"info","ts":"2024-02-29T19:18:24.317081Z","caller":"traceutil/trace.go:171","msg":"trace[809723216] compact","detail":"{revision:968; response_revision:1212; }","duration":"510.283939ms","start":"2024-02-29T19:18:23.806793Z","end":"2024-02-29T19:18:24.317077Z","steps":["trace[809723216] 'process raft request'  (duration: 371.100776ms)","trace[809723216] 'check and update compact revision'  (duration: 134.51273ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:18:24.317109Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T19:18:23.806772Z","time spent":"510.331041ms","remote":"127.0.0.1:57156","response type":"/etcdserverpb.KV/Compact","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-02-29T19:18:24.317236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.744399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:18:24.317302Z","caller":"traceutil/trace.go:171","msg":"trace[472750416] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1212; }","duration":"106.825758ms","start":"2024-02-29T19:18:24.210464Z","end":"2024-02-29T19:18:24.31729Z","steps":["trace[472750416] 'agreement among raft nodes before linearized reading'  (duration: 106.715039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:18:24.317497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.848009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:18:24.3176Z","caller":"traceutil/trace.go:171","msg":"trace[886144333] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1212; }","duration":"442.117968ms","start":"2024-02-29T19:18:23.875473Z","end":"2024-02-29T19:18:24.317591Z","steps":["trace[886144333] 'agreement among raft nodes before linearized reading'  (duration: 441.833646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:18:24.317626Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T19:18:23.875456Z","time spent":"442.162232ms","remote":"127.0.0.1:57300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-02-29T19:18:24.595985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.418269ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2818565030578292233 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1211 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-29T19:18:24.596114Z","caller":"traceutil/trace.go:171","msg":"trace[1953108303] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"204.886218ms","start":"2024-02-29T19:18:24.391209Z","end":"2024-02-29T19:18:24.596096Z","steps":["trace[1953108303] 'process raft request'  (duration: 79.67381ms)","trace[1953108303] 'compare'  (duration: 124.182482ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:18:43.15894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.875559ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2818565030578292325 > lease_revoke:<id:271d8df63ff5061a>","response":"size:28"}
	{"level":"info","ts":"2024-02-29T19:18:43.159173Z","caller":"traceutil/trace.go:171","msg":"trace[1517237989] linearizableReadLoop","detail":"{readStateIndex:1429; appliedIndex:1428; }","duration":"284.710191ms","start":"2024-02-29T19:18:42.87445Z","end":"2024-02-29T19:18:43.15916Z","steps":["trace[1517237989] 'read index received'  (duration: 157.519891ms)","trace[1517237989] 'applied index is now lower than readState.Index'  (duration: 127.189097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:18:43.159279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.835703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:18:43.159445Z","caller":"traceutil/trace.go:171","msg":"trace[898506660] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1228; }","duration":"284.960821ms","start":"2024-02-29T19:18:42.874426Z","end":"2024-02-29T19:18:43.159387Z","steps":["trace[898506660] 'agreement among raft nodes before linearized reading'  (duration: 284.812051ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:18:44.81515Z","caller":"traceutil/trace.go:171","msg":"trace[1281811083] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"111.951049ms","start":"2024-02-29T19:18:44.703184Z","end":"2024-02-29T19:18:44.815135Z","steps":["trace[1281811083] 'process raft request'  (duration: 111.688075ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:19:09.097612Z","caller":"traceutil/trace.go:171","msg":"trace[458695538] linearizableReadLoop","detail":"{readStateIndex:1457; appliedIndex:1456; }","duration":"224.195095ms","start":"2024-02-29T19:19:08.87332Z","end":"2024-02-29T19:19:09.097515Z","steps":["trace[458695538] 'read index received'  (duration: 219.671165ms)","trace[458695538] 'applied index is now lower than readState.Index'  (duration: 4.522748ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:19:09.106619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.230988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:19:09.10673Z","caller":"traceutil/trace.go:171","msg":"trace[599585563] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1250; }","duration":"233.41238ms","start":"2024-02-29T19:19:08.873292Z","end":"2024-02-29T19:19:09.106704Z","steps":["trace[599585563] 'agreement among raft nodes before linearized reading'  (duration: 225.143646ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:19:09.111934Z","caller":"traceutil/trace.go:171","msg":"trace[888775499] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"519.014227ms","start":"2024-02-29T19:19:08.592902Z","end":"2024-02-29T19:19:09.111917Z","steps":["trace[888775499] 'process raft request'  (duration: 500.153721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:19:09.112089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T19:19:08.592886Z","time spent":"519.138816ms","remote":"127.0.0.1:57296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5745,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-153528\" mod_revision:1002 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-153528\" value_size:5691 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-153528\" > >"}
	{"level":"warn","ts":"2024-02-29T19:19:09.112117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.930331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-02-29T19:19:09.112242Z","caller":"traceutil/trace.go:171","msg":"trace[1633696543] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1250; }","duration":"159.054309ms","start":"2024-02-29T19:19:08.953089Z","end":"2024-02-29T19:19:09.112143Z","steps":["trace[1633696543] 'agreement among raft nodes before linearized reading'  (duration: 158.796474ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:19:09.2282Z","caller":"traceutil/trace.go:171","msg":"trace[2106790224] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"102.628686ms","start":"2024-02-29T19:19:09.125427Z","end":"2024-02-29T19:19:09.228056Z","steps":["trace[2106790224] 'process raft request'  (duration: 101.576686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:19:43.996787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.275179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:19:43.997125Z","caller":"traceutil/trace.go:171","msg":"trace[814425735] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1278; }","duration":"125.837911ms","start":"2024-02-29T19:19:43.871216Z","end":"2024-02-29T19:19:43.997054Z","steps":["trace[814425735] 'range keys from in-memory index tree'  (duration: 125.055966ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:19:45.829316Z","caller":"traceutil/trace.go:171","msg":"trace[1264242183] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"267.891851ms","start":"2024-02-29T19:19:45.561406Z","end":"2024-02-29T19:19:45.829298Z","steps":["trace[1264242183] 'process raft request'  (duration: 267.754546ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:19:48 up 21 min,  0 users,  load average: 0.70, 0.56, 0.32
	Linux default-k8s-diff-port-153528 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [afb68f5e908cebe734b371147974874028af491c708e5cb5198d1f84d7c1a1ec] <==
	I0229 19:18:25.532196       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:18:25.676863       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:18:25.677195       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:18:25.677852       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:18:26.677771       1 handler_proxy.go:93] no RequestInfo found in the context
	W0229 19:18:26.677771       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:18:26.677959       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:18:26.677968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0229 19:18:26.678010       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:18:26.680131       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 19:19:09.115475       1 trace.go:236] Trace[615443956]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c5e5bcce-1fd0-4341-9558-c451f597db00,client:192.168.39.210,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/default-k8s-diff-port-153528/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (29-Feb-2024 19:19:08.588) (total time: 527ms):
	Trace[615443956]: ["GuaranteedUpdate etcd3" audit-id:c5e5bcce-1fd0-4341-9558-c451f597db00,key:/minions/default-k8s-diff-port-153528,type:*core.Node,resource:nodes 527ms (19:19:08.588)
	Trace[615443956]:  ---"Txn call completed" 522ms (19:19:09.114)]
	Trace[615443956]: ---"Object stored in database" 523ms (19:19:09.114)
	Trace[615443956]: [527.251606ms] [527.251606ms] END
	I0229 19:19:25.532119       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 19:19:26.678264       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:19:26.678310       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:19:26.678317       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:19:26.681077       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:19:26.681287       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:19:26.681339       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f9076d6488b1c81a82f5384182b068e3cd06217027b2030717efdbf4b6df76a3] <==
	I0229 19:14:11.270148       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:40.757840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:41.279249       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:02.336669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="356.074µs"
	E0229 19:15:10.765987       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:11.289410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:17.335776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="161.904µs"
	E0229 19:15:40.772950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:41.299193       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:10.781429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:11.308122       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:40.787797       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:41.321980       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:10.794767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:11.334917       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:40.803471       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:41.343929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:18:10.809172       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:18:11.360100       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:18:40.822298       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:18:41.374851       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:19:10.828764       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:19:11.388811       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:19:40.839297       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:19:41.400004       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [66a474fccaab4249c196a12339ffecf4b3a4a079d53db11cc6f1b249574da09f] <==
	I0229 19:03:42.067048       1 server_others.go:69] "Using iptables proxy"
	I0229 19:03:42.086992       1 node.go:141] Successfully retrieved node IP: 192.168.39.210
	I0229 19:03:42.159694       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 19:03:42.159744       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:03:42.174504       1 server_others.go:152] "Using iptables Proxier"
	I0229 19:03:42.174692       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:03:42.174926       1 server.go:846] "Version info" version="v1.28.4"
	I0229 19:03:42.174936       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:03:42.184252       1 config.go:188] "Starting service config controller"
	I0229 19:03:42.184266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:03:42.184375       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:03:42.184380       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:03:42.197882       1 config.go:315] "Starting node config controller"
	I0229 19:03:42.197970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:03:42.286749       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:03:42.286792       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:03:42.301431       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ad8f5f1b340cedf651a9032d3eb4e30640730f4284611f933e96f4ecf76ecff] <==
	W0229 19:03:26.600990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 19:03:26.601061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 19:03:26.660906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:03:26.661039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:03:26.724353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 19:03:26.724410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 19:03:26.752810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:03:26.752890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:03:26.753042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 19:03:26.753092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 19:03:26.781010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 19:03:26.781062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 19:03:26.783731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:03:26.784212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:03:26.896147       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 19:03:26.896203       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:03:26.924356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 19:03:26.924509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 19:03:26.949294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:03:26.949348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:03:26.952405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:03:26.952455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 19:03:26.954305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:03:26.954350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0229 19:03:29.272066       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:17:29 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:17:29.415317    3705 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:17:29 default-k8s-diff-port-153528 kubelet[3705]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:17:29 default-k8s-diff-port-153528 kubelet[3705]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:17:29 default-k8s-diff-port-153528 kubelet[3705]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:17:29 default-k8s-diff-port-153528 kubelet[3705]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:17:33 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:17:33.317095    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:17:48 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:17:48.317076    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:18:02 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:18:02.317722    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:18:17 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:18:17.316422    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:18:29 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:18:29.417118    3705 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:18:29 default-k8s-diff-port-153528 kubelet[3705]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:18:29 default-k8s-diff-port-153528 kubelet[3705]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:18:29 default-k8s-diff-port-153528 kubelet[3705]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:18:29 default-k8s-diff-port-153528 kubelet[3705]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:18:31 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:18:31.316414    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:18:45 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:18:45.316062    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:19:00 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:19:00.316688    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:19:15 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:19:15.317707    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:19:28 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:19:28.317027    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	Feb 29 19:19:29 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:19:29.412132    3705 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:19:29 default-k8s-diff-port-153528 kubelet[3705]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:19:29 default-k8s-diff-port-153528 kubelet[3705]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:19:29 default-k8s-diff-port-153528 kubelet[3705]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:19:29 default-k8s-diff-port-153528 kubelet[3705]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:19:42 default-k8s-diff-port-153528 kubelet[3705]: E0229 19:19:42.316295    3705 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-v95ws" podUID="e3545189-e705-4d6e-bab6-e1eceba83c0f"
	
	
	==> storage-provisioner [dd100a6a78ff38195f8c50e042b5378c2cd6e061b69c596fd75b45030c7abb3f] <==
	I0229 19:03:45.174847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:03:45.186089       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:03:45.186163       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:03:45.200030       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:03:45.200498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d!
	I0229 19:03:45.201320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1708b3d-d235-4b3f-984d-84b1219f20cb", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d became leader
	I0229 19:03:45.301708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-153528_ddfb39b0-3f56-44c1-9c0e-69ce7f38107d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-v95ws
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws: exit status 1 (85.332197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-v95ws" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-153528 describe pod metrics-server-57f55c9bc5-v95ws: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (168.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0229 19:17:43.785700   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 19:17:46.663347   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247197 -n no-preload-247197
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 19:17:48.622259615 +0000 UTC m=+6026.117243393
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-247197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-247197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.836µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-247197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-247197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-247197 logs -n 25: (1.350268892s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-848791                                        | pause-848791                 | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:48 UTC |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:48 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-393248                              | cert-expiration-393248       | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-599421 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | disable-driver-mounts-599421                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:50 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247197             | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC | 29 Feb 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-991128            | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-153528  | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-631080        | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247197                  | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-991128                 | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-247197                                   | no-preload-247197            | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-991128                                  | embed-certs-991128           | jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 19:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-631080             | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 18:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-153528       | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-153528 | jenkins | v1.32.0 | 29 Feb 24 18:53 UTC | 29 Feb 24 19:07 UTC |
	|         | default-k8s-diff-port-153528                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-631080                              | old-k8s-version-631080       | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| start   | -p newest-cni-130594 --memory=2200 --alsologtostderr   | newest-cni-130594            | jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
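The final row of the table is the invocation whose output appears in the Last Start log below. As a convenience, here is a minimal Go sketch that drives the same start from a harness; it assumes the freshly built binary sits at out/minikube-linux-amd64 exactly as in this run, and is only an illustration of the recorded flags, not part of the test suite.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirror the newest-cni-130594 start flags recorded in the audit table above.
    	cmd := exec.Command("out/minikube-linux-amd64", "start",
    		"-p", "newest-cni-130594",
    		"--memory=2200", "--alsologtostderr",
    		"--wait=apiserver,system_pods,default_sa",
    		"--feature-gates", "ServerSideApply=true",
    		"--network-plugin=cni",
    		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
    		"--driver=kvm2", "--container-runtime=crio",
    		"--kubernetes-version=v1.29.0-rc.2")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("minikube start failed: %v", err)
    	}
    }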
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:16:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:16:58.995744   52590 out.go:291] Setting OutFile to fd 1 ...
	I0229 19:16:58.996307   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996327   52590 out.go:304] Setting ErrFile to fd 2...
	I0229 19:16:58.996334   52590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:16:58.996770   52590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 19:16:58.997864   52590 out.go:298] Setting JSON to false
	I0229 19:16:58.998729   52590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7163,"bootTime":1709227056,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 19:16:58.998808   52590 start.go:139] virtualization: kvm guest
	I0229 19:16:59.000879   52590 out.go:177] * [newest-cni-130594] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 19:16:59.002082   52590 notify.go:220] Checking for updates...
	I0229 19:16:59.002109   52590 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:16:59.003375   52590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:16:59.004595   52590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 19:16:59.005809   52590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.007062   52590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 19:16:59.008233   52590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:16:59.009849   52590 config.go:182] Loaded profile config "default-k8s-diff-port-153528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.009988   52590 config.go:182] Loaded profile config "embed-certs-991128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 19:16:59.010089   52590 config.go:182] Loaded profile config "no-preload-247197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:16:59.010161   52590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:16:59.047718   52590 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 19:16:59.048992   52590 start.go:299] selected driver: kvm2
	I0229 19:16:59.049008   52590 start.go:903] validating driver "kvm2" against <nil>
	I0229 19:16:59.049034   52590 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:16:59.050133   52590 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.050237   52590 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 19:16:59.068920   52590 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 19:16:59.068976   52590 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0229 19:16:59.069020   52590 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0229 19:16:59.069294   52590 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 19:16:59.069385   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:16:59.069402   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:16:59.069416   52590 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 19:16:59.069433   52590 start_flags.go:323] config:
	{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:16:59.069616   52590 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:16:59.071805   52590 out.go:177] * Starting control plane node newest-cni-130594 in cluster newest-cni-130594
	I0229 19:16:59.073149   52590 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 19:16:59.073183   52590 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 19:16:59.073192   52590 cache.go:56] Caching tarball of preloaded images
	I0229 19:16:59.073274   52590 preload.go:174] Found /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 19:16:59.073285   52590 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 19:16:59.073365   52590 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json ...
	I0229 19:16:59.073384   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json: {Name:mk3c0011dbfa18187928c8536e3b0cff4d138ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
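The generated cluster config above is persisted to the profile's config.json before machine creation begins. A minimal sketch of reading a few of those fields back out of the saved file; the struct here is a hypothetical subset of minikube's config schema (only fields visible in the dump above), and the path assumes the default $HOME/.minikube layout rather than the MINIKUBE_HOME used by this Jenkins run.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    )

    // profileConfig models only the handful of fields inspected here (hypothetical subset).
    type profileConfig struct {
    	Name             string `json:"Name"`
    	Memory           int    `json:"Memory"`
    	KubernetesConfig struct {
    		KubernetesVersion string `json:"KubernetesVersion"`
    		ContainerRuntime  string `json:"ContainerRuntime"`
    		NetworkPlugin     string `json:"NetworkPlugin"`
    	} `json:"KubernetesConfig"`
    }

    func main() {
    	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/newest-cni-130594/config.json"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	var cfg profileConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s: k8s %s on %s (%d MiB)\n",
    		cfg.Name, cfg.KubernetesConfig.KubernetesVersion,
    		cfg.KubernetesConfig.ContainerRuntime, cfg.Memory)
    }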
	I0229 19:16:59.073506   52590 start.go:365] acquiring machines lock for newest-cni-130594: {Name:mk08b242c6b6b7cd4a6843a7d3821a8b435540dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:16:59.073533   52590 start.go:369] acquired machines lock for "newest-cni-130594" in 14.85µs
	I0229 19:16:59.073548   52590 start.go:93] Provisioning new machine with config: &{Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 19:16:59.073614   52590 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 19:16:59.075245   52590 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 19:16:59.075385   52590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 19:16:59.075425   52590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 19:16:59.090514   52590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0229 19:16:59.091034   52590 main.go:141] libmachine: () Calling .GetVersion
	I0229 19:16:59.091647   52590 main.go:141] libmachine: Using API Version  1
	I0229 19:16:59.091667   52590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 19:16:59.091980   52590 main.go:141] libmachine: () Calling .GetMachineName
	I0229 19:16:59.092186   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:16:59.092385   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:16:59.092555   52590 start.go:159] libmachine.API.Create for "newest-cni-130594" (driver="kvm2")
	I0229 19:16:59.092596   52590 client.go:168] LocalClient.Create starting
	I0229 19:16:59.092645   52590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem
	I0229 19:16:59.092709   52590 main.go:141] libmachine: Decoding PEM data...
	I0229 19:16:59.092739   52590 main.go:141] libmachine: Parsing certificate...
	I0229 19:16:59.092824   52590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem
	I0229 19:16:59.092856   52590 main.go:141] libmachine: Decoding PEM data...
	I0229 19:16:59.092878   52590 main.go:141] libmachine: Parsing certificate...
	I0229 19:16:59.092907   52590 main.go:141] libmachine: Running pre-create checks...
	I0229 19:16:59.092928   52590 main.go:141] libmachine: (newest-cni-130594) Calling .PreCreateCheck
	I0229 19:16:59.093276   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:16:59.093734   52590 main.go:141] libmachine: Creating machine...
	I0229 19:16:59.093752   52590 main.go:141] libmachine: (newest-cni-130594) Calling .Create
	I0229 19:16:59.093910   52590 main.go:141] libmachine: (newest-cni-130594) Creating KVM machine...
	I0229 19:16:59.095186   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found existing default KVM network
	I0229 19:16:59.096522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.096383   52618 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:fc:f9} reservation:<nil>}
	I0229 19:16:59.097274   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.097217   52618 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:6b:02} reservation:<nil>}
	I0229 19:16:59.098193   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.098146   52618 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5b:27:eb} reservation:<nil>}
	I0229 19:16:59.099322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.099249   52618 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000384fb0}
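The driver walks candidate private /24 subnets (stepping 192.168.39, .50, .61, .72, ... as the addresses above suggest) and skips any that already back an existing libvirt network, settling here on 192.168.72.0/24. A minimal sketch of that first-free-subnet scan, with the taken list hard-coded from the lines above purely for illustration:

    package main

    import "fmt"

    // firstFreeSubnet returns the first candidate /24 that is not already in use.
    func firstFreeSubnet(candidates, taken []string) (string, bool) {
    	used := make(map[string]bool, len(taken))
    	for _, t := range taken {
    		used[t] = true
    	}
    	for _, c := range candidates {
    		if !used[c] {
    			return c, true
    		}
    	}
    	return "", false
    }

    func main() {
    	taken := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} // from the log above
    	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24", "192.168.83.0/24"}
    	if s, ok := firstFreeSubnet(candidates, taken); ok {
    		fmt.Println("using free private subnet", s) // prints 192.168.72.0/24
    	}
    }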
	I0229 19:16:59.104597   52590 main.go:141] libmachine: (newest-cni-130594) DBG | trying to create private KVM network mk-newest-cni-130594 192.168.72.0/24...
	I0229 19:16:59.195393   52590 main.go:141] libmachine: (newest-cni-130594) DBG | private KVM network mk-newest-cni-130594 192.168.72.0/24 created
	I0229 19:16:59.195451   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.195370   52618 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.195477   52590 main.go:141] libmachine: (newest-cni-130594) Setting up store path in /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 ...
	I0229 19:16:59.195492   52590 main.go:141] libmachine: (newest-cni-130594) Building disk image from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 19:16:59.195532   52590 main.go:141] libmachine: (newest-cni-130594) Downloading /home/jenkins/minikube-integration/18259-6428/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 19:16:59.427706   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.427577   52618 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa...
	I0229 19:16:59.780165   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.780037   52618 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/newest-cni-130594.rawdisk...
	I0229 19:16:59.780192   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Writing magic tar header
	I0229 19:16:59.780210   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Writing SSH key tar header
	I0229 19:16:59.780230   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:16:59.780169   52618 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 ...
	I0229 19:16:59.780333   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594 (perms=drwx------)
	I0229 19:16:59.780352   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594
	I0229 19:16:59.780360   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube/machines (perms=drwxr-xr-x)
	I0229 19:16:59.780376   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428/.minikube (perms=drwxr-xr-x)
	I0229 19:16:59.780391   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration/18259-6428 (perms=drwxrwxr-x)
	I0229 19:16:59.780412   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube/machines
	I0229 19:16:59.780425   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 19:16:59.780440   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 19:16:59.780449   52590 main.go:141] libmachine: (newest-cni-130594) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 19:16:59.780465   52590 main.go:141] libmachine: (newest-cni-130594) Creating domain...
	I0229 19:16:59.780530   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6428
	I0229 19:16:59.780560   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 19:16:59.780572   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home/jenkins
	I0229 19:16:59.780581   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Checking permissions on dir: /home
	I0229 19:16:59.780589   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Skipping /home - not owner
	I0229 19:16:59.781516   52590 main.go:141] libmachine: (newest-cni-130594) define libvirt domain using xml: 
	I0229 19:16:59.781538   52590 main.go:141] libmachine: (newest-cni-130594) <domain type='kvm'>
	I0229 19:16:59.781563   52590 main.go:141] libmachine: (newest-cni-130594)   <name>newest-cni-130594</name>
	I0229 19:16:59.781596   52590 main.go:141] libmachine: (newest-cni-130594)   <memory unit='MiB'>2200</memory>
	I0229 19:16:59.781610   52590 main.go:141] libmachine: (newest-cni-130594)   <vcpu>2</vcpu>
	I0229 19:16:59.781620   52590 main.go:141] libmachine: (newest-cni-130594)   <features>
	I0229 19:16:59.781629   52590 main.go:141] libmachine: (newest-cni-130594)     <acpi/>
	I0229 19:16:59.781640   52590 main.go:141] libmachine: (newest-cni-130594)     <apic/>
	I0229 19:16:59.781677   52590 main.go:141] libmachine: (newest-cni-130594)     <pae/>
	I0229 19:16:59.781698   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.781724   52590 main.go:141] libmachine: (newest-cni-130594)   </features>
	I0229 19:16:59.781743   52590 main.go:141] libmachine: (newest-cni-130594)   <cpu mode='host-passthrough'>
	I0229 19:16:59.781755   52590 main.go:141] libmachine: (newest-cni-130594)   
	I0229 19:16:59.781771   52590 main.go:141] libmachine: (newest-cni-130594)   </cpu>
	I0229 19:16:59.781790   52590 main.go:141] libmachine: (newest-cni-130594)   <os>
	I0229 19:16:59.781823   52590 main.go:141] libmachine: (newest-cni-130594)     <type>hvm</type>
	I0229 19:16:59.781836   52590 main.go:141] libmachine: (newest-cni-130594)     <boot dev='cdrom'/>
	I0229 19:16:59.781844   52590 main.go:141] libmachine: (newest-cni-130594)     <boot dev='hd'/>
	I0229 19:16:59.781856   52590 main.go:141] libmachine: (newest-cni-130594)     <bootmenu enable='no'/>
	I0229 19:16:59.781865   52590 main.go:141] libmachine: (newest-cni-130594)   </os>
	I0229 19:16:59.781876   52590 main.go:141] libmachine: (newest-cni-130594)   <devices>
	I0229 19:16:59.781884   52590 main.go:141] libmachine: (newest-cni-130594)     <disk type='file' device='cdrom'>
	I0229 19:16:59.781897   52590 main.go:141] libmachine: (newest-cni-130594)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/boot2docker.iso'/>
	I0229 19:16:59.781908   52590 main.go:141] libmachine: (newest-cni-130594)       <target dev='hdc' bus='scsi'/>
	I0229 19:16:59.781927   52590 main.go:141] libmachine: (newest-cni-130594)       <readonly/>
	I0229 19:16:59.781945   52590 main.go:141] libmachine: (newest-cni-130594)     </disk>
	I0229 19:16:59.781955   52590 main.go:141] libmachine: (newest-cni-130594)     <disk type='file' device='disk'>
	I0229 19:16:59.781979   52590 main.go:141] libmachine: (newest-cni-130594)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 19:16:59.781997   52590 main.go:141] libmachine: (newest-cni-130594)       <source file='/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/newest-cni-130594.rawdisk'/>
	I0229 19:16:59.782010   52590 main.go:141] libmachine: (newest-cni-130594)       <target dev='hda' bus='virtio'/>
	I0229 19:16:59.782018   52590 main.go:141] libmachine: (newest-cni-130594)     </disk>
	I0229 19:16:59.782033   52590 main.go:141] libmachine: (newest-cni-130594)     <interface type='network'>
	I0229 19:16:59.782059   52590 main.go:141] libmachine: (newest-cni-130594)       <source network='mk-newest-cni-130594'/>
	I0229 19:16:59.782085   52590 main.go:141] libmachine: (newest-cni-130594)       <model type='virtio'/>
	I0229 19:16:59.782098   52590 main.go:141] libmachine: (newest-cni-130594)     </interface>
	I0229 19:16:59.782108   52590 main.go:141] libmachine: (newest-cni-130594)     <interface type='network'>
	I0229 19:16:59.782122   52590 main.go:141] libmachine: (newest-cni-130594)       <source network='default'/>
	I0229 19:16:59.782134   52590 main.go:141] libmachine: (newest-cni-130594)       <model type='virtio'/>
	I0229 19:16:59.782160   52590 main.go:141] libmachine: (newest-cni-130594)     </interface>
	I0229 19:16:59.782182   52590 main.go:141] libmachine: (newest-cni-130594)     <serial type='pty'>
	I0229 19:16:59.782209   52590 main.go:141] libmachine: (newest-cni-130594)       <target port='0'/>
	I0229 19:16:59.782251   52590 main.go:141] libmachine: (newest-cni-130594)     </serial>
	I0229 19:16:59.782270   52590 main.go:141] libmachine: (newest-cni-130594)     <console type='pty'>
	I0229 19:16:59.782281   52590 main.go:141] libmachine: (newest-cni-130594)       <target type='serial' port='0'/>
	I0229 19:16:59.782289   52590 main.go:141] libmachine: (newest-cni-130594)     </console>
	I0229 19:16:59.782296   52590 main.go:141] libmachine: (newest-cni-130594)     <rng model='virtio'>
	I0229 19:16:59.782306   52590 main.go:141] libmachine: (newest-cni-130594)       <backend model='random'>/dev/random</backend>
	I0229 19:16:59.782318   52590 main.go:141] libmachine: (newest-cni-130594)     </rng>
	I0229 19:16:59.782327   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.782334   52590 main.go:141] libmachine: (newest-cni-130594)     
	I0229 19:16:59.782344   52590 main.go:141] libmachine: (newest-cni-130594)   </devices>
	I0229 19:16:59.782355   52590 main.go:141] libmachine: (newest-cni-130594) </domain>
	I0229 19:16:59.782361   52590 main.go:141] libmachine: (newest-cni-130594) 
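The XML above is handed to libvirt to define the guest and then boot it ("Creating domain..."). A minimal sketch of that define-and-start step using the libvirt Go bindings; this is an assumption about the mechanics (the kvm2 driver's actual plumbing lives in docker-machine-driver-kvm2), and the XML file name is hypothetical.

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	xml, err := os.ReadFile("newest-cni-130594.xml") // domain XML as printed above
    	if err != nil {
    		log.Fatal(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain, then start it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("domain defined and started; now waiting for a DHCP lease")
    }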
	I0229 19:16:59.786697   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:8d:17:19 in network default
	I0229 19:16:59.787322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:16:59.787348   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring networks are active...
	I0229 19:16:59.788077   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring network default is active
	I0229 19:16:59.788520   52590 main.go:141] libmachine: (newest-cni-130594) Ensuring network mk-newest-cni-130594 is active
	I0229 19:16:59.789113   52590 main.go:141] libmachine: (newest-cni-130594) Getting domain xml...
	I0229 19:16:59.789804   52590 main.go:141] libmachine: (newest-cni-130594) Creating domain...
	I0229 19:17:01.129183   52590 main.go:141] libmachine: (newest-cni-130594) Waiting to get IP...
	I0229 19:17:01.130118   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.130599   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.130641   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.130578   52618 retry.go:31] will retry after 303.868776ms: waiting for machine to come up
	I0229 19:17:01.436101   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.436703   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.436733   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.436654   52618 retry.go:31] will retry after 299.644815ms: waiting for machine to come up
	I0229 19:17:01.738274   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:01.738742   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:01.738769   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:01.738705   52618 retry.go:31] will retry after 364.815241ms: waiting for machine to come up
	I0229 19:17:02.105155   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:02.105626   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:02.105655   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:02.105576   52618 retry.go:31] will retry after 484.317766ms: waiting for machine to come up
	I0229 19:17:02.591110   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:02.591531   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:02.591559   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:02.591477   52618 retry.go:31] will retry after 698.688666ms: waiting for machine to come up
	I0229 19:17:03.291933   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:03.292509   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:03.292537   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:03.292463   52618 retry.go:31] will retry after 779.864202ms: waiting for machine to come up
	I0229 19:17:04.074373   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:04.074800   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:04.074831   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:04.074747   52618 retry.go:31] will retry after 946.144699ms: waiting for machine to come up
	I0229 19:17:05.022155   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:05.022628   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:05.022655   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:05.022575   52618 retry.go:31] will retry after 1.080490095s: waiting for machine to come up
	I0229 19:17:06.104781   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:06.105246   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:06.105269   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:06.105175   52618 retry.go:31] will retry after 1.547469431s: waiting for machine to come up
	I0229 19:17:07.654746   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:07.655214   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:07.655242   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:07.655165   52618 retry.go:31] will retry after 1.69867016s: waiting for machine to come up
	I0229 19:17:09.355971   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:09.356493   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:09.356522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:09.356445   52618 retry.go:31] will retry after 2.383457338s: waiting for machine to come up
	I0229 19:17:11.741351   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:11.741845   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:11.741867   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:11.741798   52618 retry.go:31] will retry after 2.907806637s: waiting for machine to come up
	I0229 19:17:14.651011   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:14.651492   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:14.651541   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:14.651438   52618 retry.go:31] will retry after 3.634634613s: waiting for machine to come up
	I0229 19:17:18.288159   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:18.288691   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find current IP address of domain newest-cni-130594 in network mk-newest-cni-130594
	I0229 19:17:18.288719   52590 main.go:141] libmachine: (newest-cni-130594) DBG | I0229 19:17:18.288626   52618 retry.go:31] will retry after 5.271835381s: waiting for machine to come up
	I0229 19:17:23.564046   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.564521   52590 main.go:141] libmachine: (newest-cni-130594) Found IP for machine: 192.168.72.67
	I0229 19:17:23.564542   52590 main.go:141] libmachine: (newest-cni-130594) Reserving static IP address...
	I0229 19:17:23.564572   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has current primary IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.564899   52590 main.go:141] libmachine: (newest-cni-130594) DBG | unable to find host DHCP lease matching {name: "newest-cni-130594", mac: "52:54:00:cd:4c:af", ip: "192.168.72.67"} in network mk-newest-cni-130594
	I0229 19:17:23.639032   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Getting to WaitForSSH function...
	I0229 19:17:23.639059   52590 main.go:141] libmachine: (newest-cni-130594) Reserved static IP address: 192.168.72.67
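The "will retry after ...: waiting for machine to come up" lines above come from a retry helper that re-checks the DHCP leases with a growing, jittered delay until the guest reports an address. A minimal sketch of that pattern; the growth factor and jitter here are illustrative, not minikube's exact values.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn with a growing, jittered delay,
    // mirroring the "will retry after ..." lines in the log above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		delay = delay * 3 / 2 // grow the base delay each round
    	}
    	return errors.New("machine never reported an IP")
    }

    func main() {
    	tries := 0
    	_ = retryWithBackoff(15, 300*time.Millisecond, func() error {
    		tries++
    		if tries < 5 { // stand-in for "unable to find current IP address"
    			return errors.New("no lease yet")
    		}
    		return nil
    	})
    	fmt.Println("found IP after", tries, "lookups")
    }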
	I0229 19:17:23.639073   52590 main.go:141] libmachine: (newest-cni-130594) Waiting for SSH to be available...
	I0229 19:17:23.641664   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.642087   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.642112   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.642267   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using SSH client type: external
	I0229 19:17:23.642295   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa (-rw-------)
	I0229 19:17:23.642322   52590 main.go:141] libmachine: (newest-cni-130594) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 19:17:23.642344   52590 main.go:141] libmachine: (newest-cni-130594) DBG | About to run SSH command:
	I0229 19:17:23.642359   52590 main.go:141] libmachine: (newest-cni-130594) DBG | exit 0
	I0229 19:17:23.768306   52590 main.go:141] libmachine: (newest-cni-130594) DBG | SSH cmd err, output: <nil>: 
	I0229 19:17:23.768620   52590 main.go:141] libmachine: (newest-cni-130594) KVM machine creation complete!
	I0229 19:17:23.769129   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:17:23.769657   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:23.769832   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:23.769921   52590 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 19:17:23.769932   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetState
	I0229 19:17:23.771299   52590 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 19:17:23.771314   52590 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 19:17:23.771320   52590 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 19:17:23.771325   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:23.773705   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.774105   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.774128   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.774268   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:23.774437   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.774609   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.774746   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:23.774892   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:23.775162   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:23.775179   52590 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 19:17:23.882603   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
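Both the external ssh probe earlier and the native client here simply run "exit 0" until it succeeds. A minimal sketch of the same liveness check with golang.org/x/crypto/ssh, reusing the generated machine key; the $HOME/.minikube path is the default layout, not this run's MINIKUBE_HOME.

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-130594/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.67:22", cfg)
    	if err != nil {
    		log.Fatalf("SSH not ready yet: %v", err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	if err := sess.Run("exit 0"); err != nil {
    		log.Fatalf("probe failed: %v", err)
    	}
    	log.Println("SSH is available")
    }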
	I0229 19:17:23.882629   52590 main.go:141] libmachine: Detecting the provisioner...
	I0229 19:17:23.882639   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:23.885998   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.886406   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:23.886438   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:23.886651   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:23.886854   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.887075   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:23.887202   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:23.887406   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:23.887637   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:23.887662   52590 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 19:17:23.996263   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 19:17:23.998200   52590 main.go:141] libmachine: found compatible host: buildroot
	I0229 19:17:23.998210   52590 main.go:141] libmachine: Provisioning with buildroot...
	I0229 19:17:23.998219   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:23.998480   52590 buildroot.go:166] provisioning hostname "newest-cni-130594"
	I0229 19:17:23.998501   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:23.998717   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.001132   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.001566   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.001591   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.001712   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.001897   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.002078   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.002242   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.002443   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.002641   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.002656   52590 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-130594 && echo "newest-cni-130594" | sudo tee /etc/hostname
	I0229 19:17:24.121880   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-130594
	
	I0229 19:17:24.121909   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.124625   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.124996   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.125026   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.125195   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.125396   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.125566   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.125674   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.125851   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.126050   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.126067   52590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-130594' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-130594/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-130594' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:17:24.246036   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:17:24.246072   52590 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6428/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6428/.minikube}
	I0229 19:17:24.246102   52590 buildroot.go:174] setting up certificates
	I0229 19:17:24.246113   52590 provision.go:83] configureAuth start
	I0229 19:17:24.246129   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetMachineName
	I0229 19:17:24.246397   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:24.249064   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.249381   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.249420   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.249577   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.251720   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.252187   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.252214   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.252351   52590 provision.go:138] copyHostCerts
	I0229 19:17:24.252395   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem, removing ...
	I0229 19:17:24.252411   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem
	I0229 19:17:24.252484   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/ca.pem (1082 bytes)
	I0229 19:17:24.252564   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem, removing ...
	I0229 19:17:24.252572   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem
	I0229 19:17:24.252598   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/cert.pem (1123 bytes)
	I0229 19:17:24.252646   52590 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem, removing ...
	I0229 19:17:24.252653   52590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem
	I0229 19:17:24.252674   52590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6428/.minikube/key.pem (1675 bytes)
	I0229 19:17:24.252712   52590 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem org=jenkins.newest-cni-130594 san=[192.168.72.67 192.168.72.67 localhost 127.0.0.1 minikube newest-cni-130594]
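provision.go:112 issues a per-machine server certificate signed by the local CA, with the listed IPs and names as SANs. A minimal sketch of that signing step with crypto/x509, assuming the CA certificate and key are already loaded; it is a standalone helper, not minikube's actual implementation.

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a DER-encoded server cert for the SANs seen in the log above.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-130594"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-130594"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.72.67"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }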
	I0229 19:17:24.380606   52590 provision.go:172] copyRemoteCerts
	I0229 19:17:24.380658   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:17:24.380680   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.383548   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.383968   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.383994   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.384207   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.384405   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.384580   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.384722   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:24.471294   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:17:24.500180   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 19:17:24.528971   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
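copyRemoteCerts then pushes the CA and server pair into /etc/docker on the guest over the same SSH channel. A minimal sketch of writing one such root-owned file remotely by piping it into sudo tee; this is a stand-in for minikube's internal ssh_runner scp, and pushFile is a hypothetical helper. It would be called with a client dialed as in the SSH availability sketch above.

    package provision

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushFile copies local data to a root-owned path on the guest via "sudo tee".
    func pushFile(client *ssh.Client, local, remote string) error {
    	data, err := os.ReadFile(local)
    	if err != nil {
    		return err
    	}
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s >/dev/null", remote))
    }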
	I0229 19:17:24.555974   52590 provision.go:86] duration metric: configureAuth took 309.845099ms
	I0229 19:17:24.555999   52590 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:17:24.556219   52590 config.go:182] Loaded profile config "newest-cni-130594": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 19:17:24.556409   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.559791   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.560242   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.560269   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.560440   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.560631   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.560827   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.560978   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.561108   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.561257   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.561271   52590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 19:17:24.853732   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
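For reference (editorial note, not part of the test output): the command above drops a one-line environment file for CRI-O and restarts the service. On a guest provisioned this way the result can be spot-checked with something like:

    cat /etc/sysconfig/crio.minikube   # expect the CRIO_MINIKUBE_OPTIONS line echoed above
    systemctl is-active crio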
	
	I0229 19:17:24.853768   52590 main.go:141] libmachine: Checking connection to Docker...
	I0229 19:17:24.853782   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetURL
	I0229 19:17:24.855095   52590 main.go:141] libmachine: (newest-cni-130594) DBG | Using libvirt version 6000000
	I0229 19:17:24.857669   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.858053   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.858080   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.858218   52590 main.go:141] libmachine: Docker is up and running!
	I0229 19:17:24.858233   52590 main.go:141] libmachine: Reticulating splines...
	I0229 19:17:24.858241   52590 client.go:171] LocalClient.Create took 25.765633903s
	I0229 19:17:24.858264   52590 start.go:167] duration metric: libmachine.API.Create for "newest-cni-130594" took 25.765710381s
	I0229 19:17:24.858277   52590 start.go:300] post-start starting for "newest-cni-130594" (driver="kvm2")
	I0229 19:17:24.858305   52590 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:17:24.858323   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:24.858549   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:17:24.858575   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.860883   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.861178   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.861207   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.861311   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.861508   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.861650   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.861769   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:24.947737   52590 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:17:24.953000   52590 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:17:24.953024   52590 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/addons for local assets ...
	I0229 19:17:24.953083   52590 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6428/.minikube/files for local assets ...
	I0229 19:17:24.953170   52590 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem -> 136512.pem in /etc/ssl/certs
	I0229 19:17:24.953307   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:17:24.964750   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:17:24.992425   52590 start.go:303] post-start completed in 134.13702ms
	I0229 19:17:24.992470   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetConfigRaw
	I0229 19:17:24.993035   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:24.995610   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.996085   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.996129   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.996348   52590 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/config.json ...
	I0229 19:17:24.996508   52590 start.go:128] duration metric: createHost completed in 25.922884482s
	I0229 19:17:24.996529   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:24.998597   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.998887   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:24.998913   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:24.999018   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:24.999198   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.999370   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:24.999500   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:24.999692   52590 main.go:141] libmachine: Using SSH client type: native
	I0229 19:17:24.999891   52590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.67 22 <nil> <nil>}
	I0229 19:17:24.999903   52590 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 19:17:25.104443   52590 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709234245.065729189
	
	I0229 19:17:25.104464   52590 fix.go:206] guest clock: 1709234245.065729189
	I0229 19:17:25.104471   52590 fix.go:219] Guest: 2024-02-29 19:17:25.065729189 +0000 UTC Remote: 2024-02-29 19:17:24.996518377 +0000 UTC m=+26.051925571 (delta=69.210812ms)
	I0229 19:17:25.104489   52590 fix.go:190] guest clock delta is within tolerance: 69.210812ms
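For context, the delta above is just the difference of the two timestamps on the previous line: guest 1709234245.065729189 minus host 2024-02-29 19:17:24.996518377 UTC (epoch 1709234244.996518377) is 0.069210812 s, i.e. the 69.210812ms reported, which falls inside the allowed guest-clock drift, so no clock adjustment is attempted.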
	I0229 19:17:25.104493   52590 start.go:83] releasing machines lock for "newest-cni-130594", held for 26.030952225s
	I0229 19:17:25.104512   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.104764   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:25.107166   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.107475   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.107495   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.107624   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108094   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108247   52590 main.go:141] libmachine: (newest-cni-130594) Calling .DriverName
	I0229 19:17:25.108332   52590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:17:25.108388   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:25.108429   52590 ssh_runner.go:195] Run: cat /version.json
	I0229 19:17:25.108469   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHHostname
	I0229 19:17:25.111075   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111358   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111507   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.111532   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111668   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:25.111771   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:25.111798   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:25.111825   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:25.111918   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHPort
	I0229 19:17:25.111992   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:25.112089   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHKeyPath
	I0229 19:17:25.112116   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:25.112223   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetSSHUsername
	I0229 19:17:25.112340   52590 sshutil.go:53] new ssh client: &{IP:192.168.72.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/newest-cni-130594/id_rsa Username:docker}
	I0229 19:17:25.188094   52590 ssh_runner.go:195] Run: systemctl --version
	I0229 19:17:25.215120   52590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 19:17:25.379266   52590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 19:17:25.386163   52590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:17:25.386239   52590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:17:25.405980   52590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:17:25.405997   52590 start.go:475] detecting cgroup driver to use...
	I0229 19:17:25.406056   52590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:17:25.423701   52590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:17:25.439286   52590 docker.go:217] disabling cri-docker service (if available) ...
	I0229 19:17:25.439343   52590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 19:17:25.454154   52590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 19:17:25.472204   52590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 19:17:25.599774   52590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 19:17:25.783605   52590 docker.go:233] disabling docker service ...
	I0229 19:17:25.783662   52590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 19:17:25.800927   52590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 19:17:25.817576   52590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 19:17:25.961885   52590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 19:17:26.091696   52590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 19:17:26.108268   52590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:17:26.128845   52590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 19:17:26.128913   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.139802   52590 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 19:17:26.139851   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.150834   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.161954   52590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 19:17:26.172783   52590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
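Taken together, the sed edits against /etc/crio/crio.conf.d/02-crio.conf above pin exactly three keys; a quick way to confirm the result on such a node (a sketch, the drop-in also carries other settings that are left untouched):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"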
	I0229 19:17:26.184091   52590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:17:26.194645   52590 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 19:17:26.194691   52590 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 19:17:26.209767   52590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
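Once br_netfilter is loaded, the bridge sysctl that failed above becomes available; a manual spot-check on a node provisioned this way (illustrative, not part of the run) could be:

    lsmod | grep br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward   # prints 1 after the echo above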
	I0229 19:17:26.220316   52590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:17:26.369106   52590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 19:17:26.528922   52590 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 19:17:26.529007   52590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 19:17:26.534202   52590 start.go:543] Will wait 60s for crictl version
	I0229 19:17:26.534260   52590 ssh_runner.go:195] Run: which crictl
	I0229 19:17:26.538713   52590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:17:26.577846   52590 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 19:17:26.577912   52590 ssh_runner.go:195] Run: crio --version
	I0229 19:17:26.610234   52590 ssh_runner.go:195] Run: crio --version
	I0229 19:17:26.648536   52590 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 19:17:26.649909   52590 main.go:141] libmachine: (newest-cni-130594) Calling .GetIP
	I0229 19:17:26.652522   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:26.652911   52590 main.go:141] libmachine: (newest-cni-130594) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:4c:af", ip: ""} in network mk-newest-cni-130594: {Iface:virbr4 ExpiryTime:2024-02-29 20:17:15 +0000 UTC Type:0 Mac:52:54:00:cd:4c:af Iaid: IPaddr:192.168.72.67 Prefix:24 Hostname:newest-cni-130594 Clientid:01:52:54:00:cd:4c:af}
	I0229 19:17:26.652938   52590 main.go:141] libmachine: (newest-cni-130594) DBG | domain newest-cni-130594 has defined IP address 192.168.72.67 and MAC address 52:54:00:cd:4c:af in network mk-newest-cni-130594
	I0229 19:17:26.653119   52590 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 19:17:26.657984   52590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:17:26.673711   52590 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 19:17:26.674983   52590 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 19:17:26.675103   52590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:17:26.713136   52590 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 19:17:26.713204   52590 ssh_runner.go:195] Run: which lz4
	I0229 19:17:26.718227   52590 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 19:17:26.723311   52590 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 19:17:26.723337   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0229 19:17:28.411193   52590 crio.go:444] Took 1.692990 seconds to copy over tarball
	I0229 19:17:28.411289   52590 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 19:17:31.061912   52590 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.650595155s)
	I0229 19:17:31.061938   52590 crio.go:451] Took 2.650720 seconds to extract the tarball
	I0229 19:17:31.061949   52590 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 19:17:31.104722   52590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 19:17:31.154179   52590 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 19:17:31.154200   52590 cache_images.go:84] Images are preloaded, skipping loading
	I0229 19:17:31.154263   52590 ssh_runner.go:195] Run: crio config
	I0229 19:17:31.208130   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:17:31.208151   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:17:31.208172   52590 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 19:17:31.208189   52590 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.67 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-130594 NodeName:newest-cni-130594 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:17:31.208319   52590 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-130594"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
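The YAML above is the config that gets written to /var/tmp/minikube/kubeadm.yaml further down in this log. Independently of minikube, a config of this shape can be exercised without mutating node state by passing it to kubeadm's dry-run mode, for example (illustrative only):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run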
	
	I0229 19:17:31.208386   52590 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-130594 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
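The kubelet unit drop-in above ends up at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 418-byte scp a few lines below); once it is in place, the effective ExecStart can be confirmed with standard systemd tooling, e.g.:

    systemctl cat kubelet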
	I0229 19:17:31.208438   52590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 19:17:31.219617   52590 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:17:31.219700   52590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:17:31.230716   52590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0229 19:17:31.250025   52590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 19:17:31.269406   52590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0229 19:17:31.289223   52590 ssh_runner.go:195] Run: grep 192.168.72.67	control-plane.minikube.internal$ /etc/hosts
	I0229 19:17:31.294155   52590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:17:31.308415   52590 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594 for IP: 192.168.72.67
	I0229 19:17:31.308465   52590 certs.go:190] acquiring lock for shared ca certs: {Name:mk3505d2468da66ac20dc3ebf913782cecf1ba0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.308594   52590 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key
	I0229 19:17:31.308644   52590 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key
	I0229 19:17:31.308682   52590 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key
	I0229 19:17:31.308694   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt with IP's: []
	I0229 19:17:31.602911   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt ...
	I0229 19:17:31.602954   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.crt: {Name:mk84b0372b7eeab5506ba924c29e59fb1d3a98c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.603165   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key ...
	I0229 19:17:31.603183   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/client.key: {Name:mkea0ea101041d6e8d1d0994ce0ee3a3930c1c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.603306   52590 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a
	I0229 19:17:31.603325   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a with IP's: [192.168.72.67 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:17:31.822178   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a ...
	I0229 19:17:31.822207   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a: {Name:mkf89468c1c794c72942ee93be8239055b42f705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.822351   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a ...
	I0229 19:17:31.822367   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a: {Name:mkedc3a76aee512e961669f55be5eb52f5cd67a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:31.822459   52590 certs.go:337] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt.1a1e1c5a -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt
	I0229 19:17:31.822559   52590 certs.go:341] copying /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key.1a1e1c5a -> /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key
	I0229 19:17:31.822635   52590 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key
	I0229 19:17:31.822654   52590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt with IP's: []
	I0229 19:17:32.071836   52590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt ...
	I0229 19:17:32.071865   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt: {Name:mk88223d09842bb710f0c20c4698f8412e7438e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:32.072022   52590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key ...
	I0229 19:17:32.072039   52590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key: {Name:mk1864f6af6a1518f2495e532f7d21e72a6b853b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:17:32.072194   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem (1338 bytes)
	W0229 19:17:32.072235   52590 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651_empty.pem, impossibly tiny 0 bytes
	I0229 19:17:32.072245   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 19:17:32.072272   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/ca.pem (1082 bytes)
	I0229 19:17:32.072298   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/cert.pem (1123 bytes)
	I0229 19:17:32.072324   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/certs/home/jenkins/minikube-integration/18259-6428/.minikube/certs/key.pem (1675 bytes)
	I0229 19:17:32.072362   52590 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem (1708 bytes)
	I0229 19:17:32.072907   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:17:32.108594   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:17:32.139943   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:17:32.170446   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/newest-cni-130594/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 19:17:32.199534   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:17:32.228261   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:17:32.258155   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:17:32.286505   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 19:17:32.315425   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/ssl/certs/136512.pem --> /usr/share/ca-certificates/136512.pem (1708 bytes)
	I0229 19:17:32.342585   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:17:32.370286   52590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6428/.minikube/certs/13651.pem --> /usr/share/ca-certificates/13651.pem (1338 bytes)
	I0229 19:17:32.397758   52590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:17:32.417755   52590 ssh_runner.go:195] Run: openssl version
	I0229 19:17:32.424572   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136512.pem && ln -fs /usr/share/ca-certificates/136512.pem /etc/ssl/certs/136512.pem"
	I0229 19:17:32.437075   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.444117   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:49 /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.444185   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136512.pem
	I0229 19:17:32.451416   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136512.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:17:32.464614   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:17:32.477240   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.483092   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.483153   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:17:32.490050   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:17:32.503604   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13651.pem && ln -fs /usr/share/ca-certificates/13651.pem /etc/ssl/certs/13651.pem"
	I0229 19:17:32.516764   52590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.522614   52590 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:49 /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.522669   52590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13651.pem
	I0229 19:17:32.529410   52590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13651.pem /etc/ssl/certs/51391683.0"
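The test -L / ln -fs pairs in this block recreate the usual OpenSSL subject-hash layout for CA certificates: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink, where <hash> comes from the openssl x509 -hash call logged just before it. For instance, for the minikube CA on this node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941, hence /etc/ssl/certs/b5213941.0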
	I0229 19:17:32.541829   52590 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:17:32.547363   52590 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:17:32.547416   52590 kubeadm.go:404] StartCluster: {Name:newest-cni-130594 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-130594 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.67 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:17:32.547519   52590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 19:17:32.547600   52590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 19:17:32.595281   52590 cri.go:89] found id: ""
	I0229 19:17:32.595365   52590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:17:32.606230   52590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:17:32.616665   52590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:17:32.627487   52590 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:17:32.627534   52590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 19:17:32.810408   52590 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 19:17:32.810486   52590 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:17:33.066979   52590 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:17:33.067098   52590 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:17:33.067220   52590 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:17:33.335672   52590 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:17:33.569684   52590 out.go:204]   - Generating certificates and keys ...
	I0229 19:17:33.569829   52590 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:17:33.569912   52590 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:17:33.570006   52590 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:17:33.670893   52590 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:17:33.728390   52590 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:17:34.089834   52590 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:17:34.212307   52590 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:17:34.212514   52590 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-130594] and IPs [192.168.72.67 127.0.0.1 ::1]
	I0229 19:17:34.592656   52590 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:17:34.592836   52590 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-130594] and IPs [192.168.72.67 127.0.0.1 ::1]
	I0229 19:17:34.713793   52590 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:17:34.898402   52590 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:17:35.116855   52590 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:17:35.117167   52590 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:17:35.191797   52590 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:17:35.554099   52590 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 19:17:35.718092   52590 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:17:35.845917   52590 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:17:35.945043   52590 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:17:35.945887   52590 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:17:35.953127   52590 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:17:35.955074   52590 out.go:204]   - Booting up control plane ...
	I0229 19:17:35.955185   52590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:17:35.955296   52590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:17:35.955538   52590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:17:35.985211   52590 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:17:35.985340   52590 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:17:35.985403   52590 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:17:36.162557   52590 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:17:42.662374   52590 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504377 seconds
	I0229 19:17:42.680602   52590 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:17:42.714817   52590 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:17:43.266428   52590 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:17:43.266657   52590 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-130594 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:17:43.782900   52590 kubeadm.go:322] [bootstrap-token] Using token: 8ad6nq.py0619mwpwkyhtz2
	I0229 19:17:43.784307   52590 out.go:204]   - Configuring RBAC rules ...
	I0229 19:17:43.784466   52590 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:17:43.790916   52590 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:17:43.799466   52590 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:17:43.809246   52590 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:17:43.813916   52590 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:17:43.818072   52590 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:17:43.840341   52590 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:17:44.113097   52590 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:17:44.231575   52590 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:17:44.231598   52590 kubeadm.go:322] 
	I0229 19:17:44.231666   52590 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:17:44.231679   52590 kubeadm.go:322] 
	I0229 19:17:44.231758   52590 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:17:44.231791   52590 kubeadm.go:322] 
	I0229 19:17:44.231838   52590 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:17:44.231957   52590 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:17:44.232024   52590 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:17:44.232042   52590 kubeadm.go:322] 
	I0229 19:17:44.232151   52590 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:17:44.232162   52590 kubeadm.go:322] 
	I0229 19:17:44.232226   52590 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:17:44.232244   52590 kubeadm.go:322] 
	I0229 19:17:44.232336   52590 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:17:44.232447   52590 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:17:44.232557   52590 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:17:44.232568   52590 kubeadm.go:322] 
	I0229 19:17:44.232673   52590 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:17:44.232780   52590 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:17:44.232793   52590 kubeadm.go:322] 
	I0229 19:17:44.232913   52590 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8ad6nq.py0619mwpwkyhtz2 \
	I0229 19:17:44.233072   52590 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 \
	I0229 19:17:44.233136   52590 kubeadm.go:322] 	--control-plane 
	I0229 19:17:44.233151   52590 kubeadm.go:322] 
	I0229 19:17:44.233264   52590 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:17:44.233275   52590 kubeadm.go:322] 
	I0229 19:17:44.233386   52590 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8ad6nq.py0619mwpwkyhtz2 \
	I0229 19:17:44.233537   52590 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aeecb2c9da879b7c90530698bac219eb9b0e3de8de59b6b65ba867f984e81782 
	I0229 19:17:44.233681   52590 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:17:44.233701   52590 cni.go:84] Creating CNI manager for ""
	I0229 19:17:44.233710   52590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 19:17:44.236032   52590 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.362276686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77718be8-de01-4a35-b1f1-e75af7f2c1d3 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.363769892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc4d41f8-ca3d-4e51-8138-c38ac3b9ec0a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.364221822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234269364102526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc4d41f8-ca3d-4e51-8138-c38ac3b9ec0a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.365080861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69e9421a-b468-4fd2-92fe-53946d8e0b55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.365206524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69e9421a-b468-4fd2-92fe-53946d8e0b55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.365442385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69e9421a-b468-4fd2-92fe-53946d8e0b55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.416591739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3855cea-bedd-4f9d-8370-5d83ec3e7ece name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.416730888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3855cea-bedd-4f9d-8370-5d83ec3e7ece name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.418651613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af639ab6-d709-437d-9935-c52b4511181f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.418976331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234269418955840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af639ab6-d709-437d-9935-c52b4511181f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.419576024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0c8ebea-cbc4-4882-8ee0-909ab6eb9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.419623961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0c8ebea-cbc4-4882-8ee0-909ab6eb9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.419805129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0c8ebea-cbc4-4882-8ee0-909ab6eb9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.428238866Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e16d858b-8aa5-4151-8295-61004c5113ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.428454469Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3c361786-e6d8-4cb4-81c3-387677a3bb05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233462240257303,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-29T19:04:21.931392966Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5affe39c8b9be1caa7c0bbbed75fda4399f39f7888fe42fb4c3e2f0d7e7e9734,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-nj5h7,Uid:c53f2987-829e-4bea-8075-16af3a59249f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233462180978705,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-nj5h7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c53f2987-829e-4bea-8075-16af3a59249f
,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:04:21.873886647Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-9z6k5,Uid:818ddb56-c41b-4aae-8490-a9559498eecb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233460362220643,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:04:20.033756347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&PodSandboxMetadata{Name:kube-proxy-vvkjv,Uid:b5b911d8-c127-4008-a279-5f1
cac593457,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233459892471976,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b911d8-c127-4008-a279-5f1cac593457,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T19:04:19.569502786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-247197,Uid:8c3cecb6396afec4d5aed6c036a4ee58,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709233440626081664,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4
ee58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.72:8443,kubernetes.io/config.hash: 8c3cecb6396afec4d5aed6c036a4ee58,kubernetes.io/config.seen: 2024-02-29T19:04:00.139635889Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-247197,Uid:48a15f32acd3e29de98b06818f25b3f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233440612962500,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48a15f32acd3e29de98b06818f25b3f6,kubernetes.io/config.seen: 2024-02-29T19:04:00.139625975Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-247197,Uid:1d9525e57c83e7fe4adc55cd306f5f1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709233440612554627,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d9525e57c83e7fe4adc55cd306f5f1c,kubernetes.io/config.seen: 2024-02-29T19:04:00.139622341Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-247197,Uid:06bfef3935db5118eb5773929f3f215a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17092334
40610785759,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.72:2379,kubernetes.io/config.hash: 06bfef3935db5118eb5773929f3f215a,kubernetes.io/config.seen: 2024-02-29T19:04:00.139634828Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e16d858b-8aa5-4151-8295-61004c5113ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.429336680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7edcaad1-f9a7-48cf-b58c-3b5fd1bea7ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.429443705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7edcaad1-f9a7-48cf-b58c-3b5fd1bea7ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.429715758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7edcaad1-f9a7-48cf-b58c-3b5fd1bea7ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.463361283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a4ea3f0-75c5-4848-95d3-17265e8fc722 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.463501853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a4ea3f0-75c5-4848-95d3-17265e8fc722 name=/runtime.v1.RuntimeService/Version
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.464668357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0824beb9-c5bd-4e58-988d-544165d47728 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.465002961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709234269464980128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0824beb9-c5bd-4e58-988d-544165d47728 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.465512554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59d57725-2ef2-4e05-8e93-f264ccbd321c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.465596561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59d57725-2ef2-4e05-8e93-f264ccbd321c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 19:17:49 no-preload-247197 crio[684]: time="2024-02-29 19:17:49.465810621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c,PodSandboxId:a493ebfe62c8ec01fd4c76ae3fb789ffae4c37ddb97b686119fe01ea3abff20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709233462425719553,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c361786-e6d8-4cb4-81c3-387677a3bb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9d9afd6b,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43,PodSandboxId:2dc918253156be554da561f824424ad09d8e0af9ceca3d16f4bcbd4eef557e3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709233461223310872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9z6k5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818ddb56-c41b-4aae-8490-a9559498eecb,},Annotations:map[string]string{io.kubernetes.container.hash: 96f4e418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365,PodSandboxId:d6298b9e924d66a97ceffdbba8111e7432bc19f85d1e0f63841dd025b8138247,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709233460468477288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vvkjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b9
11d8-c127-4008-a279-5f1cac593457,},Annotations:map[string]string{io.kubernetes.container.hash: d5fdfa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c,PodSandboxId:a207c918f69f118f2237a099f7128018173e85ca31b1243aeb453f9e33f6faf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709233440913104037,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06bfef3935db5118eb5773929f3f215a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3da47a01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a,PodSandboxId:ede772d0d0419d604b23eee81ea143a69419ae9e3445644669e8bf9a9df81475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709233440845385957,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a,PodSandboxId:4a8a310e4612bbff70cf054794a7d34412df456d78d43db98e402e609e1c005f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709233440878406917,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a15f32acd3e29de98b06818f25b3f6,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35,PodSandboxId:7f4d2f592e7698bb1b2a38ee674726d145456f279c1be1a52ac173b815632f16,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709233440881361861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d9525e57c83e7fe4adc55cd306f5f1c,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799,PodSandboxId:d774f6e634f56d9f18ca89a03bbf39a8a32a9b55037fd9e100b52ea2c8eab545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709233146339067379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3cecb6396afec4d5aed6c036a4ee58,},Annotations:map[string]string{io.k
ubernetes.container.hash: 28b9db08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59d57725-2ef2-4e05-8e93-f264ccbd321c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c77d304aa104b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   a493ebfe62c8e       storage-provisioner
	d8cab5559bada       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   2dc918253156b       coredns-76f75df574-9z6k5
	ecdd7783c1746       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   d6298b9e924d6       kube-proxy-vvkjv
	3e058bfecc2b8       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   a207c918f69f1       etcd-no-preload-247197
	9661e52ccd784       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   7f4d2f592e769       kube-controller-manager-no-preload-247197
	2c68222b7809e       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   4a8a310e4612b       kube-scheduler-no-preload-247197
	730a369e2636f       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   ede772d0d0419       kube-apiserver-no-preload-247197
	6edf3acff7dee       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   18 minutes ago      Exited              kube-apiserver            1                   d774f6e634f56       kube-apiserver-no-preload-247197
	
	
	==> coredns [d8cab5559badab1abf69f6d38522d0231303c529f2b78b96d073ce6f1117da43] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59397 - 54629 "HINFO IN 2086180611971448474.6684178754634217295. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023047441s
	
	
	==> describe nodes <==
	Name:               no-preload-247197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-247197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=no-preload-247197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T19_04_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:04:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-247197
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:14:41 +0000   Thu, 29 Feb 2024 19:04:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.72
	  Hostname:    no-preload-247197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2650de0b91e48329c17e27b361311ab
	  System UUID:                e2650de0-b91e-4832-9c17-e27b361311ab
	  Boot ID:                    ffdc0861-0276-4e84-a23a-5d1542d1375a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-9z6k5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-247197                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-247197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-247197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vvkjv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-247197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-nj5h7              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-247197 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-247197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-247197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-247197 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-247197 event: Registered Node no-preload-247197 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060403] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046617] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796155] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.350287] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.714204] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.067185] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.059521] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067629] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.198604] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.116044] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.251607] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[ +17.623821] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 18:59] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +5.774117] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.723278] kauditd_printk_skb: 69 callbacks suppressed
	[Feb29 19:03] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.332065] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
	[Feb29 19:04] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.807135] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[ +13.223909] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.530656] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [3e058bfecc2b86d04151528b3bc3ea0c29858a4d4c2af94ff5e8cea26dd9438c] <==
	{"level":"info","ts":"2024-02-29T19:04:02.179667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.179672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 received MsgVoteResp from 87349ef525ad2fc2 at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.17969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"87349ef525ad2fc2 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.179702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 87349ef525ad2fc2 elected leader 87349ef525ad2fc2 at term 2"}
	{"level":"info","ts":"2024-02-29T19:04:02.181422Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"87349ef525ad2fc2","local-member-attributes":"{Name:no-preload-247197 ClientURLs:[https://192.168.50.72:2379]}","request-path":"/0/members/87349ef525ad2fc2/attributes","cluster-id":"cf1dc574e5b9e532","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:04:02.181582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:04:02.181969Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.182191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:04:02.182498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:04:02.182543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T19:04:02.185213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.72:2379"}
	{"level":"info","ts":"2024-02-29T19:04:02.185322Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf1dc574e5b9e532","local-member-id":"87349ef525ad2fc2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.185432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.18548Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:04:02.195449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:14:02.246662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-02-29T19:14:02.250331Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"3.244583ms","hash":1858765987}
	{"level":"info","ts":"2024-02-29T19:14:02.25039Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1858765987,"revision":724,"compact-revision":-1}
	{"level":"warn","ts":"2024-02-29T19:17:15.934824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.754741ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3441469154048742380 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.72\" mod_revision:1118 > success:<request_put:<key:\"/registry/masterleases/192.168.50.72\" value_size:66 lease:3441469154048742378 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.72\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-29T19:17:15.935057Z","caller":"traceutil/trace.go:171","msg":"trace[584242533] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"142.234317ms","start":"2024-02-29T19:17:15.792782Z","end":"2024-02-29T19:17:15.935017Z","steps":["trace[584242533] 'read index received'  (duration: 12.262964ms)","trace[584242533] 'applied index is now lower than readState.Index'  (duration: 129.969867ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T19:17:15.935326Z","caller":"traceutil/trace.go:171","msg":"trace[935966519] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"194.817161ms","start":"2024-02-29T19:17:15.740473Z","end":"2024-02-29T19:17:15.935291Z","steps":["trace[935966519] 'process raft request'  (duration: 64.606609ms)","trace[935966519] 'compare'  (duration: 128.573144ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T19:17:15.935811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.095278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:17:15.935882Z","caller":"traceutil/trace.go:171","msg":"trace[1227938759] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1126; }","duration":"143.206735ms","start":"2024-02-29T19:17:15.792662Z","end":"2024-02-29T19:17:15.935869Z","steps":["trace[1227938759] 'agreement among raft nodes before linearized reading'  (duration: 142.428456ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:17:33.414922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.032576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:17:33.415731Z","caller":"traceutil/trace.go:171","msg":"trace[395612955] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1139; }","duration":"104.779878ms","start":"2024-02-29T19:17:33.310844Z","end":"2024-02-29T19:17:33.415624Z","steps":["trace[395612955] 'range keys from in-memory index tree'  (duration: 103.966962ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:17:49 up 19 min,  0 users,  load average: 0.28, 0.18, 0.11
	Linux no-preload-247197 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6edf3acff7dee1fec0aa396e614c8d822a2a7e2074e2da8863c3427cda994799] <==
	W0229 19:03:52.896347       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.110732       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.194046       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.299251       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.369940       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.471046       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.492031       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.564233       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.564291       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.586961       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.586980       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.671720       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.715114       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.727804       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.842808       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:53.879601       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.085842       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.112639       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.169809       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.242260       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.347342       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.658736       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.777476       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.838282       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:03:54.877257       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [730a369e2636f53e3085f418ca1fe59278c5988e5639018d26088db3e0d29a9a] <==
	I0229 19:12:04.770297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:14:03.771985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:03.772421       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0229 19:14:04.772749       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:04.772844       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:14:04.772868       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:14:04.772975       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:14:04.773386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:14:04.774723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:04.773537       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:04.773649       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:15:04.773713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:15:04.775928       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:15:04.776255       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:15:04.776310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:17:04.774352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:04.774466       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 19:17:04.774481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 19:17:04.776640       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 19:17:04.776824       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 19:17:04.776860       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9661e52ccd784fecb3c7d46bee0dd7f089a63bf8729d5990840d79192dc01b35] <==
	I0229 19:12:19.481285       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:12:49.019050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:12:49.491018       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:19.024592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:19.500079       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:13:49.031796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:13:49.510041       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:19.038096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:19.519820       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:14:49.043076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:14:49.528370       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:15:19.048674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:19.536033       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 19:15:23.449574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="211.713µs"
	I0229 19:15:35.448957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.457µs"
	E0229 19:15:49.054455       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:15:49.544207       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:19.060796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:19.554615       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:16:49.065856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:16:49.564290       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:19.072981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:19.573587       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 19:17:49.078379       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 19:17:49.582717       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ecdd7783c174633c8e5cbf63bbf2bd9a803b88800ec269f503d1974990869365] <==
	I0229 19:04:21.239570       1 server_others.go:72] "Using iptables proxy"
	I0229 19:04:21.256607       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.72"]
	I0229 19:04:21.392764       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 19:04:21.392815       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:04:21.392832       1 server_others.go:168] "Using iptables Proxier"
	I0229 19:04:21.406319       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:04:21.406553       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 19:04:21.406565       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:04:21.423377       1 config.go:188] "Starting service config controller"
	I0229 19:04:21.423652       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:04:21.425739       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:04:21.425852       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:04:21.431824       1 config.go:315] "Starting node config controller"
	I0229 19:04:21.431959       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:04:21.525471       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:04:21.526229       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:04:21.533687       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2c68222b7809e6dde0a01964d3dcac0dd1789c074f0b42457a8d52ddf09a616a] <==
	W0229 19:04:04.815387       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 19:04:04.815446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 19:04:04.888916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 19:04:04.889070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 19:04:04.902473       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:04.902814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:04.977866       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:04.978003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.049108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 19:04:05.049391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 19:04:05.073903       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:05.074018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.080733       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:04:05.080814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:04:05.103662       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:04:05.103809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:04:05.160627       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 19:04:05.160881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 19:04:05.190981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 19:04:05.191050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 19:04:05.194894       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 19:04:05.194948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 19:04:05.216449       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:04:05.216506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0229 19:04:06.878350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443244    4095 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443609    4095 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bzmmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nj5h7_kube-system(c53f2987-829e-4bea-8075-16af3a59249f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 19:15:09 no-preload-247197 kubelet[4095]: E0229 19:15:09.443648    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:23 no-preload-247197 kubelet[4095]: E0229 19:15:23.429408    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:35 no-preload-247197 kubelet[4095]: E0229 19:15:35.429752    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:15:49 no-preload-247197 kubelet[4095]: E0229 19:15:49.428952    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:02 no-preload-247197 kubelet[4095]: E0229 19:16:02.429726    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]: E0229 19:16:07.511956    4095 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:16:07 no-preload-247197 kubelet[4095]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:16:13 no-preload-247197 kubelet[4095]: E0229 19:16:13.432787    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:27 no-preload-247197 kubelet[4095]: E0229 19:16:27.429684    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:40 no-preload-247197 kubelet[4095]: E0229 19:16:40.428907    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:16:53 no-preload-247197 kubelet[4095]: E0229 19:16:53.429200    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:04 no-preload-247197 kubelet[4095]: E0229 19:17:04.429015    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]: E0229 19:17:07.512490    4095 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:17:07 no-preload-247197 kubelet[4095]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:17:17 no-preload-247197 kubelet[4095]: E0229 19:17:17.430912    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:28 no-preload-247197 kubelet[4095]: E0229 19:17:28.429295    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	Feb 29 19:17:41 no-preload-247197 kubelet[4095]: E0229 19:17:41.431416    4095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nj5h7" podUID="c53f2987-829e-4bea-8075-16af3a59249f"
	
	
	==> storage-provisioner [c77d304aa104bed3e19645c4702145c26a9b0c8580d21f085e5eac06fa73ca2c] <==
	I0229 19:04:22.525311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:04:22.538520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:04:22.538631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:04:22.546862       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:04:22.547049       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b!
	I0229 19:04:22.548893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d1ca69e-678d-46db-bce2-7a4947442015", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b became leader
	I0229 19:04:22.650285       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-247197_53d8800c-14e3-4c7d-ab0a-ad66790b746b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247197 -n no-preload-247197
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-247197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nj5h7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7: exit status 1 (70.709596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nj5h7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-247197 describe pod metrics-server-57f55c9bc5-nj5h7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (12.41s)

                                                
                                    

Test pass (236/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 60.08
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 49.81
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.13
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 58.06
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.13
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 65.24
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 151.24
38 TestAddons/parallel/Registry 17.69
40 TestAddons/parallel/InspektorGadget 10.98
41 TestAddons/parallel/MetricsServer 6.99
42 TestAddons/parallel/HelmTiller 11.61
44 TestAddons/parallel/CSI 66.38
45 TestAddons/parallel/Headlamp 15.27
46 TestAddons/parallel/CloudSpanner 5.97
47 TestAddons/parallel/LocalPath 55.85
48 TestAddons/parallel/NvidiaDevicePlugin 5.75
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 76.61
55 TestCertExpiration 272.97
57 TestForceSystemdFlag 60.95
58 TestForceSystemdEnv 54.78
60 TestKVMDriverInstallOrUpdate 4.65
64 TestErrorSpam/setup 44.8
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.59
68 TestErrorSpam/unpause 1.69
69 TestErrorSpam/stop 2.24
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.94
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.32
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
81 TestFunctional/serial/CacheCmd/cache/add_local 2.25
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 33.72
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.45
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 3.93
95 TestFunctional/parallel/ConfigCmd 0.39
96 TestFunctional/parallel/DashboardCmd 19.63
97 TestFunctional/parallel/DryRun 0.3
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.31
103 TestFunctional/parallel/ServiceCmdConnect 10.75
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 39.44
107 TestFunctional/parallel/SSHCmd 0.42
108 TestFunctional/parallel/CpCmd 1.4
109 TestFunctional/parallel/MySQL 32.33
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 1.38
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.65
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.8
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
126 TestFunctional/parallel/ImageCommands/ImageBuild 4.89
127 TestFunctional/parallel/ImageCommands/Setup 2.18
128 TestFunctional/parallel/ServiceCmd/DeployApp 12.19
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.57
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.68
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.09
141 TestFunctional/parallel/ServiceCmd/List 0.43
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
145 TestFunctional/parallel/ProfileCmd/profile_list 0.38
146 TestFunctional/parallel/ServiceCmd/Format 0.46
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
148 TestFunctional/parallel/ServiceCmd/URL 0.46
149 TestFunctional/parallel/MountCmd/any-port 9.84
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.35
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.69
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
154 TestFunctional/parallel/MountCmd/specific-port 1.97
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
172 TestJSONOutput/start/Command 98.36
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.8
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.7
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.2
200 TestMainNoArgs 0.05
201 TestMinikubeProfile 90.82
204 TestMountStart/serial/StartWithMountFirst 27.99
205 TestMountStart/serial/VerifyMountFirst 0.38
206 TestMountStart/serial/StartWithMountSecond 28.22
207 TestMountStart/serial/VerifyMountSecond 0.39
208 TestMountStart/serial/DeleteFirst 0.66
209 TestMountStart/serial/VerifyMountPostDelete 0.39
210 TestMountStart/serial/Stop 1.21
211 TestMountStart/serial/RestartStopped 23.01
212 TestMountStart/serial/VerifyMountPostStop 0.38
215 TestMultiNode/serial/FreshStart2Nodes 181.44
216 TestMultiNode/serial/DeployApp2Nodes 6.42
217 TestMultiNode/serial/PingHostFrom2Pods 0.84
218 TestMultiNode/serial/AddNode 39.69
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.21
221 TestMultiNode/serial/CopyFile 7.48
222 TestMultiNode/serial/StopNode 2.97
223 TestMultiNode/serial/StartAfterStop 28.89
225 TestMultiNode/serial/DeleteNode 1.53
227 TestMultiNode/serial/RestartMultiNode 447.39
228 TestMultiNode/serial/ValidateNameConflict 47.2
235 TestScheduledStopUnix 120.72
239 TestRunningBinaryUpgrade 209.5
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 102.27
246 TestStoppedBinaryUpgrade/Setup 2.59
247 TestStoppedBinaryUpgrade/Upgrade 152.43
248 TestNoKubernetes/serial/StartWithStopK8s 40.89
249 TestNoKubernetes/serial/Start 32.54
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
251 TestNoKubernetes/serial/ProfileList 14.13
252 TestNoKubernetes/serial/Stop 1.19
253 TestNoKubernetes/serial/StartNoArgs 27.23
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
258 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
263 TestNetworkPlugins/group/false 3.39
275 TestPause/serial/Start 138.12
280 TestStartStop/group/no-preload/serial/FirstStart 116.66
282 TestStartStop/group/embed-certs/serial/FirstStart 101.47
283 TestStartStop/group/no-preload/serial/DeployApp 10.33
285 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.53
286 TestStartStop/group/embed-certs/serial/DeployApp 10.29
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
289 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
298 TestStartStop/group/no-preload/serial/SecondStart 968
299 TestStartStop/group/embed-certs/serial/SecondStart 878.96
300 TestStartStop/group/old-k8s-version/serial/Stop 1.28
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 859.05
312 TestStartStop/group/newest-cni/serial/FirstStart 59.93
315 TestNetworkPlugins/group/auto/Start 100.02
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
318 TestStartStop/group/newest-cni/serial/Stop 10.12
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/newest-cni/serial/SecondStart 53.9
321 TestNetworkPlugins/group/kindnet/Start 88.87
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
325 TestStartStop/group/newest-cni/serial/Pause 2.93
326 TestNetworkPlugins/group/calico/Start 98.59
327 TestNetworkPlugins/group/auto/KubeletFlags 0.22
328 TestNetworkPlugins/group/auto/NetCatPod 10.25
329 TestNetworkPlugins/group/auto/DNS 0.19
330 TestNetworkPlugins/group/auto/Localhost 0.23
331 TestNetworkPlugins/group/auto/HairPin 0.35
332 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
334 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
335 TestNetworkPlugins/group/custom-flannel/Start 91.85
336 TestNetworkPlugins/group/enable-default-cni/Start 91.94
337 TestNetworkPlugins/group/kindnet/DNS 0.18
338 TestNetworkPlugins/group/kindnet/Localhost 0.15
339 TestNetworkPlugins/group/kindnet/HairPin 0.19
340 TestNetworkPlugins/group/flannel/Start 123.4
341 TestNetworkPlugins/group/calico/ControllerPod 6.01
342 TestNetworkPlugins/group/calico/KubeletFlags 0.34
343 TestNetworkPlugins/group/calico/NetCatPod 14.59
344 TestNetworkPlugins/group/calico/DNS 0.19
345 TestNetworkPlugins/group/calico/Localhost 0.15
346 TestNetworkPlugins/group/calico/HairPin 0.17
347 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
348 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.35
349 TestNetworkPlugins/group/bridge/Start 100.82
350 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
351 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
352 TestNetworkPlugins/group/custom-flannel/DNS 0.26
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
358 TestNetworkPlugins/group/flannel/ControllerPod 6.01
359 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
360 TestNetworkPlugins/group/flannel/NetCatPod 11.23
361 TestNetworkPlugins/group/flannel/DNS 0.16
362 TestNetworkPlugins/group/flannel/Localhost 0.15
363 TestNetworkPlugins/group/flannel/HairPin 0.16
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
365 TestNetworkPlugins/group/bridge/NetCatPod 10.21
366 TestNetworkPlugins/group/bridge/DNS 0.17
367 TestNetworkPlugins/group/bridge/Localhost 0.13
368 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (60.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-392053 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-392053 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (1m0.081877135s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (60.08s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-392053
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-392053: exit status 85 (72.479092ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |          |
	|         | -p download-only-392053        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:37:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:37:22.602914   13663 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:37:22.603178   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:22.603187   13663 out.go:304] Setting ErrFile to fd 2...
	I0229 17:37:22.603191   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:22.603386   13663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	W0229 17:37:22.603499   13663 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18259-6428/.minikube/config/config.json: open /home/jenkins/minikube-integration/18259-6428/.minikube/config/config.json: no such file or directory
	I0229 17:37:22.604051   13663 out.go:298] Setting JSON to true
	I0229 17:37:22.604881   13663 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1187,"bootTime":1709227056,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:37:22.604944   13663 start.go:139] virtualization: kvm guest
	I0229 17:37:22.607406   13663 out.go:97] [download-only-392053] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:37:22.609042   13663 out.go:169] MINIKUBE_LOCATION=18259
	W0229 17:37:22.607519   13663 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 17:37:22.607585   13663 notify.go:220] Checking for updates...
	I0229 17:37:22.611685   13663 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:37:22.613238   13663 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:37:22.614603   13663 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:37:22.615977   13663 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:37:22.618558   13663 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:37:22.618778   13663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:37:22.714880   13663 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:37:22.714924   13663 start.go:299] selected driver: kvm2
	I0229 17:37:22.714933   13663 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:37:22.715263   13663 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:22.715379   13663 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:37:22.729074   13663 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:37:22.729115   13663 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:37:22.729561   13663 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:37:22.729727   13663 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:37:22.729789   13663 cni.go:84] Creating CNI manager for ""
	I0229 17:37:22.729802   13663 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:37:22.729810   13663 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:37:22.729817   13663 start_flags.go:323] config:
	{Name:download-only-392053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-392053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:37:22.730001   13663 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:22.731731   13663 out.go:97] Downloading VM boot image ...
	I0229 17:37:22.731762   13663 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:37:32.463149   13663 out.go:97] Starting control plane node download-only-392053 in cluster download-only-392053
	I0229 17:37:32.463188   13663 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 17:37:32.573480   13663 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 17:37:32.573516   13663 cache.go:56] Caching tarball of preloaded images
	I0229 17:37:32.573707   13663 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 17:37:32.575796   13663 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 17:37:32.575810   13663 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:37:33.114575   13663 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 17:37:47.021975   13663 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:37:47.022064   13663 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:37:47.860121   13663 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 17:37:47.860445   13663 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-392053/config.json ...
	I0229 17:37:47.860474   13663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-392053/config.json: {Name:mk135a4dc6b74de4f717ace97218cfe389ed41f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:37:47.860616   13663 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 17:37:47.860802   13663 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-392053"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
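Note: the non-zero exit above is the expected result rather than a regression; the profile was created with --download-only, so no control plane node exists for "minikube logs" to query (hence the 'The control plane node "" does not exist.' message), and the test still passes. A minimal shell sketch to reproduce the same behaviour outside the harness; the profile name download-only-demo is only a placeholder:

    # Download artifacts only, then confirm that `logs` exits non-zero (status 85 in this run).
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr --kubernetes-version=v1.16.0 \
      --container-runtime=crio --driver=kvm2
    out/minikube-linux-amd64 logs -p download-only-demo
    echo "logs exit status: $?"
    out/minikube-linux-amd64 delete -p download-only-demo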

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-392053
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (49.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-181797 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-181797 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (49.808657404s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (49.81s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-181797
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-181797: exit status 85 (68.990086ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-392053        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-392053        | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only        | download-only-181797 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-181797        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:23.022307   13957 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:38:23.022424   13957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:23.022435   13957 out.go:304] Setting ErrFile to fd 2...
	I0229 17:38:23.022440   13957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:23.022628   13957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:38:23.023208   13957 out.go:298] Setting JSON to true
	I0229 17:38:23.024110   13957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1247,"bootTime":1709227056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:38:23.024177   13957 start.go:139] virtualization: kvm guest
	I0229 17:38:23.026290   13957 out.go:97] [download-only-181797] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:38:23.027784   13957 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:23.026441   13957 notify.go:220] Checking for updates...
	I0229 17:38:23.030318   13957 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:38:23.031569   13957 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:38:23.032815   13957 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:38:23.033937   13957 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:38:23.036194   13957 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:23.036478   13957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:23.066788   13957 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:38:23.066822   13957 start.go:299] selected driver: kvm2
	I0229 17:38:23.066830   13957 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:38:23.067164   13957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:23.067261   13957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:38:23.081689   13957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:38:23.081749   13957 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:23.082224   13957 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:38:23.082380   13957 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:23.082469   13957 cni.go:84] Creating CNI manager for ""
	I0229 17:38:23.082485   13957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:38:23.082497   13957 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:38:23.082524   13957 start_flags.go:323] config:
	{Name:download-only-181797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-181797 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:23.082700   13957 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:23.084365   13957 out.go:97] Starting control plane node download-only-181797 in cluster download-only-181797
	I0229 17:38:23.084380   13957 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 17:38:23.196734   13957 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 17:38:23.196756   13957 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:23.196889   13957 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 17:38:23.198616   13957 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 17:38:23.198633   13957 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:38:23.305116   13957 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 17:38:37.752519   13957 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:38:37.752617   13957 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:38:38.626463   13957 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 17:38:38.626804   13957 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-181797/config.json ...
	I0229 17:38:38.626839   13957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-181797/config.json: {Name:mk1da5b5c32716d9d33e38a6a2d89cf9bb58a94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:38:38.627040   13957 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 17:38:38.627216   13957 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-181797"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-181797
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (58.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-928093 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-928093 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (58.064460073s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (58.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-928093
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-928093: exit status 85 (70.972785ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-392053           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-392053           | download-only-392053 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-181797 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-181797           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-181797           | download-only-181797 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| start   | -o=json --download-only           | download-only-928093 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | -p download-only-928093           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:39:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:39:13.161322   14216 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:39:13.161451   14216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:13.161460   14216 out.go:304] Setting ErrFile to fd 2...
	I0229 17:39:13.161465   14216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:13.161657   14216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:39:13.162198   14216 out.go:298] Setting JSON to true
	I0229 17:39:13.163051   14216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1297,"bootTime":1709227056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:39:13.163115   14216 start.go:139] virtualization: kvm guest
	I0229 17:39:13.165265   14216 out.go:97] [download-only-928093] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:39:13.166921   14216 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:39:13.165469   14216 notify.go:220] Checking for updates...
	I0229 17:39:13.169545   14216 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:39:13.170867   14216 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:39:13.172039   14216 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:39:13.173273   14216 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:39:13.175506   14216 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:39:13.175698   14216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:39:13.205313   14216 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:39:13.205340   14216 start.go:299] selected driver: kvm2
	I0229 17:39:13.205347   14216 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:39:13.205746   14216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:13.205841   14216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:39:13.220395   14216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:39:13.220457   14216 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:39:13.220920   14216 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:39:13.221040   14216 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:39:13.221104   14216 cni.go:84] Creating CNI manager for ""
	I0229 17:39:13.221116   14216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 17:39:13.221124   14216 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:39:13.221134   14216 start_flags.go:323] config:
	{Name:download-only-928093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-928093 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:39:13.221269   14216 iso.go:125] acquiring lock: {Name:mk4b55cee5696fff78a1389276e4e011ad655e56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:13.222910   14216 out.go:97] Starting control plane node download-only-928093 in cluster download-only-928093
	I0229 17:39:13.222931   14216 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 17:39:13.329382   14216 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 17:39:13.329407   14216 cache.go:56] Caching tarball of preloaded images
	I0229 17:39:13.329561   14216 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 17:39:13.331399   14216 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 17:39:13.331433   14216 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:39:13.444137   14216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 17:39:37.331573   14216 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:39:37.331665   14216 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0229 17:39:38.092609   14216 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 17:39:38.092919   14216 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-928093/config.json ...
	I0229 17:39:38.092946   14216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/download-only-928093/config.json: {Name:mk709aab6b6dcc02ab013bb048cc18047a25c25e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:39:38.093093   14216 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 17:39:38.093215   14216 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18259-6428/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-928093"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-928093
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-104722 --alsologtostderr --binary-mirror http://127.0.0.1:36219 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-104722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-104722
--- PASS: TestBinaryMirror (0.57s)
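TestBinaryMirror exercises the --binary-mirror flag: minikube is told to fetch its Kubernetes binaries from the local HTTP endpoint shown above rather than from the default release host. A hedged sketch of the same invocation; the URL stands in for whatever mirror of the Kubernetes release layout is actually being served, and the profile name is illustrative:

    # Fetch Kubernetes binaries from a local mirror instead of the default download host.
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:36219 \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-demo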

                                                
                                    
x
+
TestOffline (65.24s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-487627 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-487627 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.206004143s)
helpers_test.go:175: Cleaning up "offline-crio-487627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-487627
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-487627: (1.029139194s)
--- PASS: TestOffline (65.24s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-848237
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-848237: exit status 85 (63.545698ms)

                                                
                                                
-- stdout --
	* Profile "addons-848237" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-848237"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-848237
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-848237: exit status 85 (64.500202ms)

                                                
                                                
-- stdout --
	* Profile "addons-848237" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-848237"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (151.24s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-848237 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-848237 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.244368891s)
--- PASS: TestAddons/Setup (151.24s)
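The setup above enables the whole addon set at cluster creation time via repeated --addons flags. For reference, a hedged sketch of toggling individual addons on an already-running profile instead; the profile name addons-demo is only illustrative:

    # Enable individual addons after start and list their state.
    out/minikube-linux-amd64 -p addons-demo addons enable ingress
    out/minikube-linux-amd64 -p addons-demo addons enable metrics-server
    out/minikube-linux-amd64 -p addons-demo addons list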

                                                
                                    
x
+
TestAddons/parallel/Registry (17.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 31.325823ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ztkrj" [c7f086f8-8e7c-4a01-88e8-e51d7edef88b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004373401s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-676t7" [9371b307-e44d-4f2a-ba6a-e6c43f46f6e3] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005366716s
addons_test.go:340: (dbg) Run:  kubectl --context addons-848237 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-848237 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-848237 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.091803708s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 ip
2024/02/29 17:43:00 [DEBUG] GET http://192.168.39.195:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 addons disable registry --alsologtostderr -v=1: (1.392284622s)
--- PASS: TestAddons/parallel/Registry (17.69s)
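The decisive check in this test is the in-cluster probe: a throwaway busybox pod must be able to resolve and reach the registry Service by its cluster DNS name. The same probe can be replayed by hand; the context name addons-demo is illustrative and assumes the registry addon is enabled:

    # Re-run the registry reachability probe from inside the cluster.
    kubectl --context addons-demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"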

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pxlgz" [8caf76c4-9658-4723-9b62-6e71c37f27e3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004826494s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-848237
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-848237: (5.977726611s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 31.488333ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-rhml2" [b0f01afc-d498-421c-8612-a6deac805806] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006123224s
addons_test.go:415: (dbg) Run:  kubectl --context addons-848237 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.61s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.538428ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-w4gtk" [fd013549-9d6e-4dae-8e37-ecc25403919b] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008050211s
addons_test.go:473: (dbg) Run:  kubectl --context addons-848237 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-848237 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.968419898s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.61s)

                                                
                                    
x
+
TestAddons/parallel/CSI (66.38s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 34.947795ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-848237 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-848237 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c512e705-956b-4ca2-8e51-e59e79e1b1e5] Pending
helpers_test.go:344: "task-pv-pod" [c512e705-956b-4ca2-8e51-e59e79e1b1e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c512e705-956b-4ca2-8e51-e59e79e1b1e5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.011712373s
addons_test.go:584: (dbg) Run:  kubectl --context addons-848237 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-848237 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-848237 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-848237 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-848237 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-848237 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-848237 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [400bdb06-19c8-4ca6-9463-a8632c480d4a] Pending
helpers_test.go:344: "task-pv-pod-restore" [400bdb06-19c8-4ca6-9463-a8632c480d4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [400bdb06-19c8-4ca6-9463-a8632c480d4a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00454778s
addons_test.go:626: (dbg) Run:  kubectl --context addons-848237 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-848237 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-848237 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.855768349s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.38s)
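The CSI flow above (PVC, snapshot, restored PVC, restored pod) can be reproduced by hand. The testdata manifests themselves are not included in this log, so the class names and size in the sketch below are assumptions, not the exact contents of snapshot.yaml and pvc-restore.yaml:

# Sketch only: rough equivalents of testdata/csi-hostpath-driver/snapshot.yaml and pvc-restore.yaml.
# Class names (csi-hostpath-snapclass, csi-hostpath-sc) and the 1Gi size are assumptions.
kubectl --context addons-848237 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-848237 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# Same readiness probe the helper polls above:
kubectl --context addons-848237 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}' -n default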

                                                
                                    
TestAddons/parallel/Headlamp (15.27s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-848237 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-848237 --alsologtostderr -v=1: (1.26543535s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-7tv5l" [51e32711-5762-4a9b-934a-dcb5b85938af] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-7tv5l" [51e32711-5762-4a9b-934a-dcb5b85938af] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004168871s
--- PASS: TestAddons/parallel/Headlamp (15.27s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.97s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-b5hqq" [2708efc6-39f3-44a2-a79c-b45b18b7548b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004378241s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-848237
--- PASS: TestAddons/parallel/CloudSpanner (5.97s)

                                                
                                    
TestAddons/parallel/LocalPath (55.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-848237 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-848237 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b0f12370-03e2-4095-8dd1-f2f93495f765] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b0f12370-03e2-4095-8dd1-f2f93495f765] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b0f12370-03e2-4095-8dd1-f2f93495f765] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004311642s
addons_test.go:891: (dbg) Run:  kubectl --context addons-848237 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 ssh "cat /opt/local-path-provisioner/pvc-1474dba4-8760-495c-bcc0-f8b3ca2ce82e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-848237 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-848237 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-848237 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-848237 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.920709859s)
--- PASS: TestAddons/parallel/LocalPath (55.85s)
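The storage-provisioner-rancher testdata is likewise not reproduced in this log. A minimal sketch of the kind of PVC/pod pair it applies (storage class, image, and file content are assumptions) would be:

# Sketch only: approximate shape of testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml.
kubectl --context addons-848237 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-provisioner > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

The local-path storage class typically uses volumeBindingMode: WaitForFirstConsumer, which is why the PVC phase polls above keep repeating until the consuming pod is scheduled.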

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.75s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zd2r4" [a3ce85f6-cadd-4e86-b3db-77445eb8f021] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00480772s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-848237
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-s7mv2" [7b17e5c9-b2c3-48df-bac5-526e28913fda] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004284923s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-848237 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-848237 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (76.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-009676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-009676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m15.191672726s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-009676 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-009676 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-009676 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-009676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-009676
--- PASS: TestCertOptions (76.61s)
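The same assertions can be checked by hand; the commands mirror the ones run above, while the grep and jsonpath details are illustrative rather than the test's own code:

# Are the extra --apiserver-ips / --apiserver-names present as SANs?
out/minikube-linux-amd64 -p cert-options-009676 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# Does the kubeconfig use the non-default --apiserver-port=8555?
kubectl --context cert-options-009676 config view -o jsonpath='{.clusters[*].cluster.server}'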

                                                
                                    
TestCertExpiration (272.97s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-393248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-393248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m14.014539131s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-393248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-393248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (17.910105542s)
helpers_test.go:175: Cleaning up "cert-expiration-393248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-393248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-393248: (1.042100002s)
--- PASS: TestCertExpiration (272.97s)
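To see the effect of --cert-expiration directly (not part of the test), the certificate's notAfter date can be read from inside the VM:

out/minikube-linux-amd64 -p cert-expiration-393248 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# With --cert-expiration=3m the date is only minutes away; after the second start
# with --cert-expiration=8760h it moves out to roughly one year.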

                                                
                                    
TestForceSystemdFlag (60.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-297898 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-297898 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.972053362s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-297898 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-297898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-297898
--- PASS: TestForceSystemdFlag (60.95s)
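The cat of /etc/crio/crio.conf.d/02-crio.conf above is where the cgroup manager is asserted. A hand check might look like this; the expected value is an assumption based on --force-systemd:

out/minikube-linux-amd64 -p force-systemd-flag-297898 ssh "grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
# expected with --force-systemd: cgroup_manager = "systemd"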

                                                
                                    
TestForceSystemdEnv (54.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-588905 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-588905 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.768280307s)
helpers_test.go:175: Cleaning up "force-systemd-env-588905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-588905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-588905: (1.011296163s)
--- PASS: TestForceSystemdEnv (54.78s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.65s)

                                                
                                    
TestErrorSpam/setup (44.8s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-631438 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-631438 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-631438 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-631438 --driver=kvm2  --container-runtime=crio: (44.802431708s)
--- PASS: TestErrorSpam/setup (44.80s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.76s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.59s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (2.24s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 stop: (2.087610579s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631438 --log_dir /tmp/nospam-631438 stop
--- PASS: TestErrorSpam/stop (2.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18259-6428/.minikube/files/etc/test/nested/copy/13651/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.94s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-531072 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.939539988s)
--- PASS: TestFunctional/serial/StartWithProxy (99.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.32s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-531072 --alsologtostderr -v=8: (37.318223714s)
functional_test.go:659: soft start took 37.318939304s for "functional-531072" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-531072 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 cache add registry.k8s.io/pause:3.3: (1.11413136s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 cache add registry.k8s.io/pause:latest: (1.012794342s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-531072 /tmp/TestFunctionalserialCacheCmdcacheadd_local4214173448/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache add minikube-local-cache-test:functional-531072
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 cache add minikube-local-cache-test:functional-531072: (1.912931685s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache delete minikube-local-cache-test:functional-531072
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-531072
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.429842ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
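For reference, the cache round-trip exercised above can be repeated by hand with the same commands: remove the image from the node, confirm it is gone, reload the local cache, confirm it is back.

out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone
out/minikube-linux-amd64 -p functional-531072 cache reload                                            # pushes cached images back into the node
out/minikube-linux-amd64 -p functional-531072 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again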

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 kubectl -- --context functional-531072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-531072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.72s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-531072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.723502059s)
functional_test.go:757: restart took 33.723664283s for "functional-531072" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.72s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-531072 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
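A rough one-liner equivalent of the phase/readiness check above (the jsonpath shape is illustrative, not the code in functional_test.go):

kubectl --context functional-531072 get po -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\n"}{end}'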

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 logs: (1.445852424s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 logs --file /tmp/TestFunctionalserialLogsFileCmd1587217774/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 logs --file /tmp/TestFunctionalserialLogsFileCmd1587217774/001/logs.txt: (1.534259937s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (3.93s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-531072 apply -f testdata/invalidsvc.yaml
E0229 17:52:43.785697   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:43.791407   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:43.801668   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:43.821916   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:43.862205   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:43.942483   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:44.102937   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 17:52:44.423539   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-531072
E0229 17:52:45.064668   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-531072: exit status 115 (280.518621ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.193:31782 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-531072 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)
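testdata/invalidsvc.yaml is not shown in this log. The SVC_UNREACHABLE exit above is what minikube service reports for a NodePort service whose selector matches no running pod; a minimal sketch of such a manifest (names and ports assumed) is:

kubectl --context functional-531072 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # no pod carries this label, so the service never gets endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-531072   # exits with SVC_UNREACHABLE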

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 config get cpus: exit status 14 (61.506843ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 config get cpus: exit status 14 (64.455231ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-531072 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-531072 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21673: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.63s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-531072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.133562ms)

                                                
                                                
-- stdout --
	* [functional-531072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:53:00.969383   21456 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:53:00.969645   21456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:00.969656   21456 out.go:304] Setting ErrFile to fd 2...
	I0229 17:53:00.969660   21456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:00.969857   21456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:53:00.970383   21456 out.go:298] Setting JSON to false
	I0229 17:53:00.971292   21456 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2125,"bootTime":1709227056,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:53:00.971347   21456 start.go:139] virtualization: kvm guest
	I0229 17:53:00.973897   21456 out.go:177] * [functional-531072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:53:00.975900   21456 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:53:00.975765   21456 notify.go:220] Checking for updates...
	I0229 17:53:00.977570   21456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:53:00.979579   21456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:53:00.981178   21456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:53:00.982785   21456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:53:00.984339   21456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:53:00.986296   21456 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 17:53:00.986890   21456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:53:00.986952   21456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:53:01.001998   21456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0229 17:53:01.002451   21456 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:53:01.003010   21456 main.go:141] libmachine: Using API Version  1
	I0229 17:53:01.003048   21456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:53:01.003456   21456 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:53:01.003627   21456 main.go:141] libmachine: (functional-531072) Calling .DriverName
	I0229 17:53:01.003875   21456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:53:01.004241   21456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:53:01.004289   21456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:53:01.019294   21456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0229 17:53:01.019753   21456 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:53:01.020423   21456 main.go:141] libmachine: Using API Version  1
	I0229 17:53:01.020459   21456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:53:01.020808   21456 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:53:01.021033   21456 main.go:141] libmachine: (functional-531072) Calling .DriverName
	I0229 17:53:01.056010   21456 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 17:53:01.057424   21456 start.go:299] selected driver: kvm2
	I0229 17:53:01.057443   21456 start.go:903] validating driver "kvm2" against &{Name:functional-531072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-531072 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:53:01.057547   21456 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:53:01.059905   21456 out.go:177] 
	W0229 17:53:01.061496   21456 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 17:53:01.062766   21456 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-531072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-531072 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.614581ms)

                                                
                                                
-- stdout --
	* [functional-531072] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:53:01.275858   21511 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:53:01.276098   21511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:01.276106   21511 out.go:304] Setting ErrFile to fd 2...
	I0229 17:53:01.276111   21511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:53:01.276384   21511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 17:53:01.276909   21511 out.go:298] Setting JSON to false
	I0229 17:53:01.277838   21511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2125,"bootTime":1709227056,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:53:01.277907   21511 start.go:139] virtualization: kvm guest
	I0229 17:53:01.279759   21511 out.go:177] * [functional-531072] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 17:53:01.281168   21511 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:53:01.281122   21511 notify.go:220] Checking for updates...
	I0229 17:53:01.282667   21511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:53:01.284262   21511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 17:53:01.285550   21511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 17:53:01.286889   21511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:53:01.288366   21511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:53:01.290147   21511 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 17:53:01.290547   21511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:53:01.290587   21511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:53:01.308470   21511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0229 17:53:01.308947   21511 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:53:01.309574   21511 main.go:141] libmachine: Using API Version  1
	I0229 17:53:01.309602   21511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:53:01.309985   21511 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:53:01.310178   21511 main.go:141] libmachine: (functional-531072) Calling .DriverName
	I0229 17:53:01.310433   21511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:53:01.310768   21511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 17:53:01.310809   21511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:53:01.326399   21511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0229 17:53:01.326815   21511 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:53:01.327361   21511 main.go:141] libmachine: Using API Version  1
	I0229 17:53:01.327387   21511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:53:01.327721   21511 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:53:01.327928   21511 main.go:141] libmachine: (functional-531072) Calling .DriverName
	I0229 17:53:01.364806   21511 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 17:53:01.366193   21511 start.go:299] selected driver: kvm2
	I0229 17:53:01.366211   21511 start.go:903] validating driver "kvm2" against &{Name:functional-531072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-531072 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:53:01.366361   21511 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:53:01.368656   21511 out.go:177] 
	W0229 17:53:01.369962   21511 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 17:53:01.371274   21511 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
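
The French stderr above is the expected result: start is aborted with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250 MiB is below the usable minimum of 1800 MB, and the message is localized. A minimal sketch of reproducing the localized failure by hand (the exact flags the test passes are an assumption here, not shown in the log):

	# Hypothetical reproduction: request an undersized VM under a French locale
	# and expect the RSRC_INSUFFICIENT_REQ_MEMORY message printed in French.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-531072 \
	  --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	echo $?   # non-zero exit is expected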

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
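
The three invocations above cover the default status view, a custom Go template via -f, and JSON output. A short sketch of the same checks against an existing profile (jq is assumed to be available; it is not part of the test):

	out/minikube-linux-amd64 -p functional-531072 status
	out/minikube-linux-amd64 -p functional-531072 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-531072 status -o json | jq .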

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-531072 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-531072 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rrjkl" [409b156e-9c64-45ac-b143-6b54beaee373] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rrjkl" [409b156e-9c64-45ac-b143-6b54beaee373] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004757987s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.193:30165
functional_test.go:1671: http://192.168.39.193:30165: success! body:

Hostname: hello-node-connect-55497b8b78-rrjkl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.193:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.193:30165
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.75s)
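
The steps above exercise the NodePort path end to end: create a Deployment from the echoserver image, expose it as a NodePort Service, resolve the node URL with minikube service --url, and fetch the echoed request. A sketch of the same flow, reusing the names from the log:

	kubectl --context functional-531072 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-531072 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-531072 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with Hostname, request headers, etc.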

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (39.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8258e9fa-ff01-4f47-8e21-2bd96f8aade6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006692151s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-531072 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-531072 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-531072 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-531072 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-531072 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [143cb769-2adc-459a-9fee-85dbed2ad688] Pending
helpers_test.go:344: "sp-pod" [143cb769-2adc-459a-9fee-85dbed2ad688] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [143cb769-2adc-459a-9fee-85dbed2ad688] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.012642295s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-531072 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-531072 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-531072 delete -f testdata/storage-provisioner/pod.yaml: (1.630844329s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-531072 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [264ea4e5-2bac-401d-9449-14bb8c082c87] Pending
helpers_test.go:344: "sp-pod" [264ea4e5-2bac-401d-9449-14bb8c082c87] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [264ea4e5-2bac-401d-9449-14bb8c082c87] Running
E0229 17:53:24.751403   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004124073s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-531072 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.44s)
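
The sequence above is a persistence check: a pod mounts the claim, writes /tmp/mount/foo, is deleted, and a second pod bound to the same claim still sees the file. Condensed, the check looks like this (the manifests are the test's own testdata paths):

	kubectl --context functional-531072 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-531072 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-531072 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-531072 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-531072 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-531072 exec sp-pod -- ls /tmp/mount   # expect: foo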

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh -n functional-531072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cp functional-531072:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1696204175/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh -n functional-531072 "sudo cat /home/docker/cp-test.txt"
E0229 17:52:46.348659   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh -n functional-531072 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (32.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-531072 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jv2wf" [f89de1d4-beac-492d-acfb-f37dc76b3408] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jv2wf" [f89de1d4-beac-492d-acfb-f37dc76b3408] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.005362312s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-531072 exec mysql-859648c796-jv2wf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-531072 exec mysql-859648c796-jv2wf -- mysql -ppassword -e "show databases;": exit status 1 (152.706028ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-531072 exec mysql-859648c796-jv2wf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.33s)
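
The first exec fails with ERROR 2002 because mysqld inside the pod is still initializing its socket even though the pod already reports Running; the test simply retries the query until it succeeds. A minimal retry sketch (interval and attempt count are arbitrary here, not the test's values):

	for i in $(seq 1 30); do
	  kubectl --context functional-531072 exec mysql-859648c796-jv2wf -- mysql -ppassword -e "show databases;" && break
	  sleep 2
	done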

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13651/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /etc/test/nested/copy/13651/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13651.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /etc/ssl/certs/13651.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13651.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /usr/share/ca-certificates/13651.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/136512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /etc/ssl/certs/136512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/136512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /usr/share/ca-certificates/136512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
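
The .pem files and the numerically named files checked above are the same certificates: the number is OpenSSL's subject-hash name for the cert, so 51391683.0 is presumably the hash-named copy of 13651.pem. A hedged spot check, assuming the certificate file is available locally:

	# The subject hash of the cert should match the numeric file name in /etc/ssl/certs:
	openssl x509 -noout -subject_hash -in 13651.pem   # expected: 51391683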

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-531072 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active docker": exit status 1 (236.028513ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active containerd": exit status 1 (234.12323ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
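
With crio as the selected runtime, systemctl is-active reports docker and containerd as inactive and exits non-zero (status 3), which minikube ssh surfaces as the exit status 1 seen above; that failure is exactly what the test asserts. A short sketch of the same check:

	out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active docker"     || echo "docker is not active, as expected"
	out/minikube-linux-amd64 -p functional-531072 ssh "sudo systemctl is-active containerd" || echo "containerd is not active, as expected"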

                                                
                                    
x
+
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-531072 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-531072
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-531072
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-531072 image ls --format short --alsologtostderr:
I0229 17:53:14.927793   22431 out.go:291] Setting OutFile to fd 1 ...
I0229 17:53:14.927893   22431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:14.927901   22431 out.go:304] Setting ErrFile to fd 2...
I0229 17:53:14.927905   22431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:14.928078   22431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
I0229 17:53:14.928625   22431 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:14.928711   22431 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:14.929065   22431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:14.929108   22431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:14.943401   22431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
I0229 17:53:14.943898   22431 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:14.944418   22431 main.go:141] libmachine: Using API Version  1
I0229 17:53:14.944433   22431 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:14.944746   22431 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:14.944953   22431 main.go:141] libmachine: (functional-531072) Calling .GetState
I0229 17:53:14.946642   22431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:14.946681   22431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:14.961333   22431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
I0229 17:53:14.961764   22431 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:14.962271   22431 main.go:141] libmachine: Using API Version  1
I0229 17:53:14.962319   22431 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:14.962679   22431 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:14.962887   22431 main.go:141] libmachine: (functional-531072) Calling .DriverName
I0229 17:53:14.963093   22431 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:14.963129   22431 main.go:141] libmachine: (functional-531072) Calling .GetSSHHostname
I0229 17:53:14.965804   22431 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:14.966216   22431 main.go:141] libmachine: (functional-531072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:03:fc", ip: ""} in network mk-functional-531072: {Iface:virbr1 ExpiryTime:2024-02-29 18:49:55 +0000 UTC Type:0 Mac:52:54:00:f6:03:fc Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-531072 Clientid:01:52:54:00:f6:03:fc}
I0229 17:53:14.966246   22431 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined IP address 192.168.39.193 and MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:14.966438   22431 main.go:141] libmachine: (functional-531072) Calling .GetSSHPort
I0229 17:53:14.966614   22431 main.go:141] libmachine: (functional-531072) Calling .GetSSHKeyPath
I0229 17:53:14.966779   22431 main.go:141] libmachine: (functional-531072) Calling .GetSSHUsername
I0229 17:53:14.966907   22431 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/functional-531072/id_rsa Username:docker}
I0229 17:53:15.092829   22431 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:15.178244   22431 main.go:141] libmachine: Making call to close driver server
I0229 17:53:15.178262   22431 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:15.178549   22431 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:15.178566   22431 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:15.178582   22431 main.go:141] libmachine: Making call to close driver server
I0229 17:53:15.178589   22431 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:15.178835   22431 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
I0229 17:53:15.178838   22431 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:15.178868   22431 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
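
The stderr above shows how image ls gathers its data: it SSHes into the node and runs sudo crictl images --output json, then formats the result. Querying the runtime directly looks roughly like this (jq is assumed to be available; field names follow crictl's JSON output):

	out/minikube-linux-amd64 -p functional-531072 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'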

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls --format table --alsologtostderr
2024/02/29 17:53:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-531072 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-531072  | 6d23121a798d1 | 3.35kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | e4720093a3c13 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-531072  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| localhost/my-image                      | functional-531072  | de5fab4df9598 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-531072 image ls --format table --alsologtostderr:
I0229 17:53:20.607331   22617 out.go:291] Setting OutFile to fd 1 ...
I0229 17:53:20.607550   22617 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:20.607560   22617 out.go:304] Setting ErrFile to fd 2...
I0229 17:53:20.607564   22617 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:20.607763   22617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
I0229 17:53:20.608333   22617 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:20.608432   22617 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:20.608825   22617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:20.608885   22617 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:20.623213   22617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
I0229 17:53:20.623652   22617 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:20.624251   22617 main.go:141] libmachine: Using API Version  1
I0229 17:53:20.624274   22617 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:20.624584   22617 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:20.624752   22617 main.go:141] libmachine: (functional-531072) Calling .GetState
I0229 17:53:20.626533   22617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:20.626573   22617 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:20.640308   22617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
I0229 17:53:20.640755   22617 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:20.641170   22617 main.go:141] libmachine: Using API Version  1
I0229 17:53:20.641191   22617 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:20.641564   22617 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:20.641730   22617 main.go:141] libmachine: (functional-531072) Calling .DriverName
I0229 17:53:20.641915   22617 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:20.641945   22617 main.go:141] libmachine: (functional-531072) Calling .GetSSHHostname
I0229 17:53:20.644276   22617 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:20.644699   22617 main.go:141] libmachine: (functional-531072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:03:fc", ip: ""} in network mk-functional-531072: {Iface:virbr1 ExpiryTime:2024-02-29 18:49:55 +0000 UTC Type:0 Mac:52:54:00:f6:03:fc Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-531072 Clientid:01:52:54:00:f6:03:fc}
I0229 17:53:20.644724   22617 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined IP address 192.168.39.193 and MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:20.644811   22617 main.go:141] libmachine: (functional-531072) Calling .GetSSHPort
I0229 17:53:20.644952   22617 main.go:141] libmachine: (functional-531072) Calling .GetSSHKeyPath
I0229 17:53:20.645106   22617 main.go:141] libmachine: (functional-531072) Calling .GetSSHUsername
I0229 17:53:20.645234   22617 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/functional-531072/id_rsa Username:docker}
I0229 17:53:20.730581   22617 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:20.770564   22617 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.770578   22617 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.770834   22617 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
I0229 17:53:20.770862   22617 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.770883   22617 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:20.770897   22617 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.770909   22617 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.771210   22617 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.771242   22617 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:20.771268   22617 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-531072 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":["docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71","docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865895"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s
-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6d23121a798d16d7a8d2d0f9d280a8780f74540225657dd46e3c174eea53c008","repoDigests":["localhost/minikube-local-cache-test@sha256:e8485687b70c81384cfc570668810b3050be865ce189709f1a82b13349fa5029"],"repoTags":["localhost/minikube-local-cache-test:functional-531072"],"size":"3345"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry
.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"e38c479d22a2893ca43e74dced6525de91e62c5cdbe9a5d03e2d5fce21d5cf2f","repoDigests":["docker.io/library/cd41df2c738dac477545d67d074665ad439973af52eb0ed2e1d5a60215ec653e-tmp@sha256:e6892a8dbbe1f4f6fe8801c9061fe4b3add39766a459ed72d6a876feb7c619d9"],"repoTags":[],"size":"1466018"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:
functional-531072"],"size":"34114467"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"de5fab4df9598061c8a50d43750276dd5e510f5b7deb7a0c3643aa083be5d1e5","repoDigests":["localhost/my-image@sha256:84fad087cba54ed25974b92929b642669777cf40978db4db68e872b0fee9e243"],"repoTags":["localhost/my-image:functional-531072"],"size":"1468600"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675
"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff
002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["reg
istry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/
pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-531072 image ls --format json --alsologtostderr:
I0229 17:53:20.378455   22581 out.go:291] Setting OutFile to fd 1 ...
I0229 17:53:20.378803   22581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:20.378818   22581 out.go:304] Setting ErrFile to fd 2...
I0229 17:53:20.378825   22581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:20.379313   22581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
I0229 17:53:20.380373   22581 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:20.380466   22581 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:20.380817   22581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:20.380854   22581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:20.396544   22581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
I0229 17:53:20.396900   22581 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:20.397385   22581 main.go:141] libmachine: Using API Version  1
I0229 17:53:20.397408   22581 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:20.397715   22581 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:20.397887   22581 main.go:141] libmachine: (functional-531072) Calling .GetState
I0229 17:53:20.399669   22581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:20.399708   22581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:20.413996   22581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
I0229 17:53:20.414398   22581 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:20.415043   22581 main.go:141] libmachine: Using API Version  1
I0229 17:53:20.415065   22581 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:20.415392   22581 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:20.415537   22581 main.go:141] libmachine: (functional-531072) Calling .DriverName
I0229 17:53:20.415703   22581 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:20.415723   22581 main.go:141] libmachine: (functional-531072) Calling .GetSSHHostname
I0229 17:53:20.418019   22581 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:20.418330   22581 main.go:141] libmachine: (functional-531072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:03:fc", ip: ""} in network mk-functional-531072: {Iface:virbr1 ExpiryTime:2024-02-29 18:49:55 +0000 UTC Type:0 Mac:52:54:00:f6:03:fc Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-531072 Clientid:01:52:54:00:f6:03:fc}
I0229 17:53:20.418359   22581 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined IP address 192.168.39.193 and MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:20.418438   22581 main.go:141] libmachine: (functional-531072) Calling .GetSSHPort
I0229 17:53:20.418594   22581 main.go:141] libmachine: (functional-531072) Calling .GetSSHKeyPath
I0229 17:53:20.418738   22581 main.go:141] libmachine: (functional-531072) Calling .GetSSHUsername
I0229 17:53:20.418864   22581 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/functional-531072/id_rsa Username:docker}
I0229 17:53:20.498162   22581 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:20.544367   22581 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.544381   22581 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.544680   22581 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.544712   22581 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:20.544727   22581 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.544735   22581 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.544752   22581 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
I0229 17:53:20.544947   22581 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
I0229 17:53:20.544981   22581 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.544997   22581 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-531072 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6d23121a798d16d7a8d2d0f9d280a8780f74540225657dd46e3c174eea53c008
repoDigests:
- localhost/minikube-local-cache-test@sha256:e8485687b70c81384cfc570668810b3050be865ce189709f1a82b13349fa5029
repoTags:
- localhost/minikube-local-cache-test:functional-531072
size: "3345"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags: []
size: "249229937"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests:
- docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "190865895"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-531072
size: "34114467"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-531072 image ls --format yaml --alsologtostderr:
I0229 17:53:15.236882   22455 out.go:291] Setting OutFile to fd 1 ...
I0229 17:53:15.237006   22455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:15.237016   22455 out.go:304] Setting ErrFile to fd 2...
I0229 17:53:15.237022   22455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:15.237218   22455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
I0229 17:53:15.237759   22455 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:15.237859   22455 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:15.238201   22455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:15.238246   22455 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:15.253210   22455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
I0229 17:53:15.253634   22455 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:15.254152   22455 main.go:141] libmachine: Using API Version  1
I0229 17:53:15.254178   22455 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:15.254594   22455 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:15.254832   22455 main.go:141] libmachine: (functional-531072) Calling .GetState
I0229 17:53:15.256703   22455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:15.256751   22455 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:15.271221   22455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
I0229 17:53:15.271616   22455 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:15.272065   22455 main.go:141] libmachine: Using API Version  1
I0229 17:53:15.272095   22455 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:15.272488   22455 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:15.272696   22455 main.go:141] libmachine: (functional-531072) Calling .DriverName
I0229 17:53:15.272908   22455 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:15.272940   22455 main.go:141] libmachine: (functional-531072) Calling .GetSSHHostname
I0229 17:53:15.275971   22455 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:15.276454   22455 main.go:141] libmachine: (functional-531072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:03:fc", ip: ""} in network mk-functional-531072: {Iface:virbr1 ExpiryTime:2024-02-29 18:49:55 +0000 UTC Type:0 Mac:52:54:00:f6:03:fc Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-531072 Clientid:01:52:54:00:f6:03:fc}
I0229 17:53:15.276487   22455 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined IP address 192.168.39.193 and MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:15.276593   22455 main.go:141] libmachine: (functional-531072) Calling .GetSSHPort
I0229 17:53:15.276800   22455 main.go:141] libmachine: (functional-531072) Calling .GetSSHKeyPath
I0229 17:53:15.277047   22455 main.go:141] libmachine: (functional-531072) Calling .GetSSHUsername
I0229 17:53:15.277187   22455 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/functional-531072/id_rsa Username:docker}
I0229 17:53:15.360284   22455 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:15.432210   22455 main.go:141] libmachine: Making call to close driver server
I0229 17:53:15.432227   22455 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:15.432502   22455 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:15.432521   22455 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:15.432534   22455 main.go:141] libmachine: Making call to close driver server
I0229 17:53:15.432544   22455 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:15.432821   22455 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:15.432876   22455 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh pgrep buildkitd: exit status 1 (212.758141ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image build -t localhost/my-image:functional-531072 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image build -t localhost/my-image:functional-531072 testdata/build --alsologtostderr: (4.443658464s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-531072 image build -t localhost/my-image:functional-531072 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e38c479d22a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-531072
--> de5fab4df95
Successfully tagged localhost/my-image:functional-531072
de5fab4df9598061c8a50d43750276dd5e510f5b7deb7a0c3643aa083be5d1e5
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-531072 image build -t localhost/my-image:functional-531072 testdata/build --alsologtostderr:
I0229 17:53:15.703540   22522 out.go:291] Setting OutFile to fd 1 ...
I0229 17:53:15.703676   22522 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:15.703685   22522 out.go:304] Setting ErrFile to fd 2...
I0229 17:53:15.703689   22522 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:53:15.703847   22522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
I0229 17:53:15.704386   22522 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:15.704932   22522 config.go:182] Loaded profile config "functional-531072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 17:53:15.705331   22522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:15.705380   22522 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:15.720143   22522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
I0229 17:53:15.720636   22522 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:15.721154   22522 main.go:141] libmachine: Using API Version  1
I0229 17:53:15.721177   22522 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:15.721517   22522 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:15.721695   22522 main.go:141] libmachine: (functional-531072) Calling .GetState
I0229 17:53:15.723862   22522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 17:53:15.723906   22522 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:15.737936   22522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
I0229 17:53:15.738318   22522 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:15.738727   22522 main.go:141] libmachine: Using API Version  1
I0229 17:53:15.738746   22522 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:15.739033   22522 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:15.739221   22522 main.go:141] libmachine: (functional-531072) Calling .DriverName
I0229 17:53:15.739411   22522 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:15.739431   22522 main.go:141] libmachine: (functional-531072) Calling .GetSSHHostname
I0229 17:53:15.742109   22522 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:15.742520   22522 main.go:141] libmachine: (functional-531072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:03:fc", ip: ""} in network mk-functional-531072: {Iface:virbr1 ExpiryTime:2024-02-29 18:49:55 +0000 UTC Type:0 Mac:52:54:00:f6:03:fc Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-531072 Clientid:01:52:54:00:f6:03:fc}
I0229 17:53:15.742550   22522 main.go:141] libmachine: (functional-531072) DBG | domain functional-531072 has defined IP address 192.168.39.193 and MAC address 52:54:00:f6:03:fc in network mk-functional-531072
I0229 17:53:15.742686   22522 main.go:141] libmachine: (functional-531072) Calling .GetSSHPort
I0229 17:53:15.742838   22522 main.go:141] libmachine: (functional-531072) Calling .GetSSHKeyPath
I0229 17:53:15.742991   22522 main.go:141] libmachine: (functional-531072) Calling .GetSSHUsername
I0229 17:53:15.743132   22522 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/functional-531072/id_rsa Username:docker}
I0229 17:53:15.822162   22522 build_images.go:151] Building image from path: /tmp/build.2770500427.tar
I0229 17:53:15.822214   22522 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 17:53:15.834997   22522 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2770500427.tar
I0229 17:53:15.840285   22522 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2770500427.tar: stat -c "%s %y" /var/lib/minikube/build/build.2770500427.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2770500427.tar': No such file or directory
I0229 17:53:15.840325   22522 ssh_runner.go:362] scp /tmp/build.2770500427.tar --> /var/lib/minikube/build/build.2770500427.tar (3072 bytes)
I0229 17:53:15.869797   22522 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2770500427
I0229 17:53:15.880659   22522 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2770500427 -xf /var/lib/minikube/build/build.2770500427.tar
I0229 17:53:15.891260   22522 crio.go:297] Building image: /var/lib/minikube/build/build.2770500427
I0229 17:53:15.891310   22522 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-531072 /var/lib/minikube/build/build.2770500427 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0229 17:53:20.063165   22522 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-531072 /var/lib/minikube/build/build.2770500427 --cgroup-manager=cgroupfs: (4.171824698s)
I0229 17:53:20.063241   22522 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2770500427
I0229 17:53:20.077220   22522 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2770500427.tar
I0229 17:53:20.089144   22522 build_images.go:207] Built localhost/my-image:functional-531072 from /tmp/build.2770500427.tar
I0229 17:53:20.089169   22522 build_images.go:123] succeeded building to: functional-531072
I0229 17:53:20.089173   22522 build_images.go:124] failed building to: 
I0229 17:53:20.089227   22522 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.089248   22522 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.089544   22522 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.089571   22522 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:20.089579   22522 main.go:141] libmachine: Making call to close driver server
I0229 17:53:20.089586   22522 main.go:141] libmachine: (functional-531072) Calling .Close
I0229 17:53:20.089594   22522 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
I0229 17:53:20.089807   22522 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:53:20.089820   22522 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:53:20.089849   22522 main.go:141] libmachine: (functional-531072) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)
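The STEP 1/3 .. 3/3 lines above imply a three-instruction build context under testdata/build. A minimal sketch of an equivalent context and the same in-cluster build, reconstructed from those logged steps (the Dockerfile file name, the working directory, and the contents of content.txt are assumptions for illustration, not taken from the repository):

# Recreate a build context matching the logged steps, then build it inside the VM (podman under CRI-O, as the log shows).
mkdir -p /tmp/demo-build && cd /tmp/demo-build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
printf 'hello from the image build demo\n' > content.txt    # placeholder payload (assumption)
out/minikube-linux-amd64 -p functional-531072 image build -t localhost/my-image:functional-531072 . --alsologtostderr
out/minikube-linux-amd64 -p functional-531072 image ls | grep my-image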

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.15978755s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-531072
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-531072 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-531072 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8zkx6" [d42097df-58fb-4322-8b98-5531030b720d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8zkx6" [d42097df-58fb-4322-8b98-5531030b720d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.005190508s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr
E0229 17:52:48.909393   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr: (4.338883828s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr
E0229 17:52:54.029950   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr: (2.466452995s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.99949456s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-531072
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image load --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr: (7.829806162s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service list -o json
functional_test.go:1490: Took "398.224345ms" to run "out/minikube-linux-amd64 -p functional-531072 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.193:30782
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "316.290867ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "59.364449ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "346.159079ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "54.200881ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.193:30782
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
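Taken together, the ServiceCmd subtests above walk the usual deploy, expose, and resolve flow. A minimal sketch of that flow outside the test harness, using the same image and port as the DeployApp step (the wait step is an added convenience, not part of the test):

# Deploy the sample echoserver, expose it as a NodePort, then ask minikube for a reachable URL.
kubectl --context functional-531072 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-531072 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-531072 wait --for=condition=available deployment/hello-node --timeout=2m
out/minikube-linux-amd64 -p functional-531072 service list
out/minikube-linux-amd64 -p functional-531072 service hello-node --url              # e.g. http://192.168.39.193:30782
out/minikube-linux-amd64 -p functional-531072 service hello-node --url --format='{{.IP}}'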

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdany-port210858053/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709229180758223510" to /tmp/TestFunctionalparallelMountCmdany-port210858053/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709229180758223510" to /tmp/TestFunctionalparallelMountCmdany-port210858053/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709229180758223510" to /tmp/TestFunctionalparallelMountCmdany-port210858053/001/test-1709229180758223510
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.562218ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 17:53 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 17:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 17:53 test-1709229180758223510
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh cat /mount-9p/test-1709229180758223510
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-531072 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1b45f127-7b72-4d91-8feb-229d08e361dd] Pending
helpers_test.go:344: "busybox-mount" [1b45f127-7b72-4d91-8feb-229d08e361dd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0229 17:53:04.271088   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [1b45f127-7b72-4d91-8feb-229d08e361dd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1b45f127-7b72-4d91-8feb-229d08e361dd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.139469684s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-531072 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdany-port210858053/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.84s)
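For reference, the 9p mount this test drives can be reproduced by hand. A minimal sketch, assuming any writable host directory (the paths below are illustrative, not the test's temporary directories):

# Share a host directory into the guest at /mount-9p over 9p, then verify it from inside the VM.
mkdir -p /tmp/demo-mount && echo created-by-hand > /tmp/demo-mount/created-by-hand
out/minikube-linux-amd64 mount -p functional-531072 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
sleep 3   # the mount needs a moment; the test itself retries the first findmnt on failure
out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-531072 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"   # the share lives only as long as the mount process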

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image save gcr.io/google-containers/addon-resizer:functional-531072 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image save gcr.io/google-containers/addon-resizer:functional-531072 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.346964147s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image rm gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.334405905s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-531072
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 image save --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-531072 image save --daemon gcr.io/google-containers/addon-resizer:functional-531072 --alsologtostderr: (1.516291372s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-531072
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)
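The four image subtests above form a save / remove / reload round-trip for the cluster's image cache. A minimal sketch of the same cycle, assuming the addon-resizer tag created in the Setup step is still cached (the tarball path is illustrative):

# Export the cached image to a tarball, drop it from the cluster, then restore it from the tarball.
out/minikube-linux-amd64 -p functional-531072 image save gcr.io/google-containers/addon-resizer:functional-531072 /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-531072 image rm gcr.io/google-containers/addon-resizer:functional-531072
out/minikube-linux-amd64 -p functional-531072 image load /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-531072 image ls | grep addon-resizer
# Or copy the cluster's image back into the host Docker daemon instead of a file:
out/minikube-linux-amd64 -p functional-531072 image save --daemon gcr.io/google-containers/addon-resizer:functional-531072
docker image inspect gcr.io/google-containers/addon-resizer:functional-531072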

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdspecific-port761888332/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.282906ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdspecific-port761888332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "sudo umount -f /mount-9p": exit status 1 (258.116138ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-531072 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdspecific-port761888332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T" /mount1: exit status 1 (286.699289ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-531072 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-531072 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-531072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3984276182/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-531072
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-531072
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-531072
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestJSONOutput/start/Command (98.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-550631 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0229 18:02:43.786138   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:02:46.665141   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:03:14.351666   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-550631 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.357675087s)
--- PASS: TestJSONOutput/start/Command (98.36s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-550631 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-550631 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-550631 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-550631 --output=json --user=testUser: (7.110696652s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-452478 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-452478 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.72106ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1a9a8177-8cf6-4b4f-9755-666bd2a24c96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-452478] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"594f171b-a93d-485b-8d3d-4a0074e2070c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"5ab490bd-3fcd-4061-b9e6-8263038e4a74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5cf0ef16-7bbe-4b0a-9c0d-cce0ba9839f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig"}}
	{"specversion":"1.0","id":"6b5ed35d-4435-4c94-81f1-0f93efe6b2bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube"}}
	{"specversion":"1.0","id":"b4232758-09d8-4294-b7a3-f6f88b785192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"021f2df6-8025-421f-8626-607a63076244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68900d05-da8a-4232-9026-10e24edae7a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-452478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-452478
--- PASS: TestErrorJSONOutput (0.20s)
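Every line of the --output=json stream above is a self-contained CloudEvents-style object, so it can be post-processed with ordinary line-oriented JSON tools. A minimal sketch using jq (the filter is an illustration, not part of the test):

# Pull out only error events, with their symbolic name, message, and exit code.
out/minikube-linux-amd64 start -p json-output-error-452478 --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)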

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (90.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-030247 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-030247 --driver=kvm2  --container-runtime=crio: (43.018454938s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-032905 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-032905 --driver=kvm2  --container-runtime=crio: (45.235796024s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-030247
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-032905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-032905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-032905
helpers_test.go:175: Cleaning up "first-030247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-030247
--- PASS: TestMinikubeProfile (90.82s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-055490 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-055490 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.994080308s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-055490 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-055490 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-070334 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-070334 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.221392489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.22s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-055490 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-070334
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-070334: (1.212112202s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-070334
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-070334: (22.012299255s)
--- PASS: TestMountStart/serial/RestartStopped (23.01s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070334 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (181.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0229 18:07:43.785624   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:07:46.662914   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:09:06.836597   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.021670803s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (181.44s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-051105 -- rollout status deployment/busybox: (4.726254995s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-dl8t4 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-m9jth -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-dl8t4 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-m9jth -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-dl8t4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-m9jth -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.42s)
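
The steps above boil down to resolving a few well-known names from inside each busybox pod. A small Go sketch of that DNS check follows; it assumes kubectl is on PATH and its current context points at the cluster under test, with the busybox test deployment already rolled out.

// dnscheck.go - sketch of the in-pod DNS lookups exercised above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Discover pod names, mirroring:
	//   kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("listing pods: %v", err)
	}

	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			// Equivalent of: kubectl exec <pod> -- nslookup <host>
			if err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).Run(); err != nil {
				log.Fatalf("pod %s failed to resolve %s: %v", pod, host, err)
			}
			fmt.Printf("pod %s resolved %s\n", pod, host)
		}
	}
}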

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-dl8t4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-dl8t4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-m9jth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051105 -- exec busybox-5b5d89c9d6-m9jth -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
TestMultiNode/serial/AddNode (39.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-051105 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-051105 -v 3 --alsologtostderr: (39.101928181s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.69s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-051105 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp testdata/cp-test.txt multinode-051105:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2896101559/001/cp-test_multinode-051105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105:/home/docker/cp-test.txt multinode-051105-m02:/home/docker/cp-test_multinode-051105_multinode-051105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test_multinode-051105_multinode-051105-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105:/home/docker/cp-test.txt multinode-051105-m03:/home/docker/cp-test_multinode-051105_multinode-051105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test_multinode-051105_multinode-051105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp testdata/cp-test.txt multinode-051105-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2896101559/001/cp-test_multinode-051105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt multinode-051105:/home/docker/cp-test_multinode-051105-m02_multinode-051105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test_multinode-051105-m02_multinode-051105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m02:/home/docker/cp-test.txt multinode-051105-m03:/home/docker/cp-test_multinode-051105-m02_multinode-051105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test_multinode-051105-m02_multinode-051105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp testdata/cp-test.txt multinode-051105-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2896101559/001/cp-test_multinode-051105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt multinode-051105:/home/docker/cp-test_multinode-051105-m03_multinode-051105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105 "sudo cat /home/docker/cp-test_multinode-051105-m03_multinode-051105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 cp multinode-051105-m03:/home/docker/cp-test.txt multinode-051105-m02:/home/docker/cp-test_multinode-051105-m03_multinode-051105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 ssh -n multinode-051105-m02 "sudo cat /home/docker/cp-test_multinode-051105-m03_multinode-051105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.48s)
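
Each cp/ssh pair above is one copy round-trip: push a local file to a node with `minikube cp`, cat it back over ssh, and compare. A compact Go sketch of a single round-trip follows; the profile and node names are placeholders for an existing multinode profile, and minikube is assumed to be on PATH.

// cpcheck.go - sketch of one copy round-trip as checked above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-051105", "multinode-051105-m02" // placeholders
	local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}

	// minikube -p <profile> cp <local> <node>:<remote>
	if err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).Run(); err != nil {
		log.Fatalf("cp failed: %v", err)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the local source")
	}
	log.Println("copy round-trip OK")
}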

                                                
                                    
TestMultiNode/serial/StopNode (2.97s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-051105 node stop m03: (2.090431333s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051105 status: exit status 7 (437.218178ms)

                                                
                                                
-- stdout --
	multinode-051105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-051105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-051105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr: exit status 7 (440.632156ms)

                                                
                                                
-- stdout --
	multinode-051105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-051105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-051105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:10:18.483755   29935 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:10:18.484008   29935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:10:18.484018   29935 out.go:304] Setting ErrFile to fd 2...
	I0229 18:10:18.484025   29935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:10:18.484220   29935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:10:18.484406   29935 out.go:298] Setting JSON to false
	I0229 18:10:18.484438   29935 mustload.go:65] Loading cluster: multinode-051105
	I0229 18:10:18.484542   29935 notify.go:220] Checking for updates...
	I0229 18:10:18.484862   29935 config.go:182] Loaded profile config "multinode-051105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:10:18.484880   29935 status.go:255] checking status of multinode-051105 ...
	I0229 18:10:18.485311   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.485374   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.501087   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0229 18:10:18.501485   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.501950   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.501970   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.502299   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.502512   29935 main.go:141] libmachine: (multinode-051105) Calling .GetState
	I0229 18:10:18.503984   29935 status.go:330] multinode-051105 host status = "Running" (err=<nil>)
	I0229 18:10:18.504008   29935 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:10:18.504303   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.504347   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.519078   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0229 18:10:18.519445   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.519846   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.519879   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.520222   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.520408   29935 main.go:141] libmachine: (multinode-051105) Calling .GetIP
	I0229 18:10:18.523160   29935 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:10:18.523559   29935 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:06:35 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:10:18.523600   29935 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:10:18.523672   29935 host.go:66] Checking if "multinode-051105" exists ...
	I0229 18:10:18.524026   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.524068   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.539819   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0229 18:10:18.540179   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.540577   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.540598   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.540877   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.541054   29935 main.go:141] libmachine: (multinode-051105) Calling .DriverName
	I0229 18:10:18.541204   29935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:10:18.541227   29935 main.go:141] libmachine: (multinode-051105) Calling .GetSSHHostname
	I0229 18:10:18.543758   29935 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:10:18.544118   29935 main.go:141] libmachine: (multinode-051105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:1f:e6", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:06:35 +0000 UTC Type:0 Mac:52:54:00:58:1f:e6 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-051105 Clientid:01:52:54:00:58:1f:e6}
	I0229 18:10:18.544149   29935 main.go:141] libmachine: (multinode-051105) DBG | domain multinode-051105 has defined IP address 192.168.39.200 and MAC address 52:54:00:58:1f:e6 in network mk-multinode-051105
	I0229 18:10:18.544260   29935 main.go:141] libmachine: (multinode-051105) Calling .GetSSHPort
	I0229 18:10:18.544429   29935 main.go:141] libmachine: (multinode-051105) Calling .GetSSHKeyPath
	I0229 18:10:18.544567   29935 main.go:141] libmachine: (multinode-051105) Calling .GetSSHUsername
	I0229 18:10:18.544672   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105/id_rsa Username:docker}
	I0229 18:10:18.627730   29935 ssh_runner.go:195] Run: systemctl --version
	I0229 18:10:18.634860   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:10:18.649501   29935 kubeconfig.go:92] found "multinode-051105" server: "https://192.168.39.200:8443"
	I0229 18:10:18.649525   29935 api_server.go:166] Checking apiserver status ...
	I0229 18:10:18.649555   29935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:10:18.666309   29935 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0229 18:10:18.676958   29935 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:10:18.677011   29935 ssh_runner.go:195] Run: ls
	I0229 18:10:18.682509   29935 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0229 18:10:18.686815   29935 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0229 18:10:18.686839   29935 status.go:421] multinode-051105 apiserver status = Running (err=<nil>)
	I0229 18:10:18.686848   29935 status.go:257] multinode-051105 status: &{Name:multinode-051105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:10:18.686861   29935 status.go:255] checking status of multinode-051105-m02 ...
	I0229 18:10:18.687145   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.687180   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.702116   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0229 18:10:18.702523   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.702980   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.703003   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.703319   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.703478   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetState
	I0229 18:10:18.704843   29935 status.go:330] multinode-051105-m02 host status = "Running" (err=<nil>)
	I0229 18:10:18.704860   29935 host.go:66] Checking if "multinode-051105-m02" exists ...
	I0229 18:10:18.705154   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.705186   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.719438   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0229 18:10:18.719823   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.720214   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.720257   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.720574   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.720727   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetIP
	I0229 18:10:18.723406   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:10:18.723779   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:10:18.723797   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:10:18.723954   29935 host.go:66] Checking if "multinode-051105-m02" exists ...
	I0229 18:10:18.724242   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.724273   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.738943   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0229 18:10:18.739466   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.739957   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.739982   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.740260   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.740450   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .DriverName
	I0229 18:10:18.740629   29935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:10:18.740647   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHHostname
	I0229 18:10:18.743353   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:10:18.743766   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b8:d5", ip: ""} in network mk-multinode-051105: {Iface:virbr1 ExpiryTime:2024-02-29 19:07:39 +0000 UTC Type:0 Mac:52:54:00:b7:b8:d5 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-051105-m02 Clientid:01:52:54:00:b7:b8:d5}
	I0229 18:10:18.743792   29935 main.go:141] libmachine: (multinode-051105-m02) DBG | domain multinode-051105-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:b7:b8:d5 in network mk-multinode-051105
	I0229 18:10:18.743934   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHPort
	I0229 18:10:18.744104   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHKeyPath
	I0229 18:10:18.744259   29935 main.go:141] libmachine: (multinode-051105-m02) Calling .GetSSHUsername
	I0229 18:10:18.744391   29935 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6428/.minikube/machines/multinode-051105-m02/id_rsa Username:docker}
	I0229 18:10:18.829875   29935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:10:18.847111   29935 status.go:257] multinode-051105-m02 status: &{Name:multinode-051105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:10:18.847141   29935 status.go:255] checking status of multinode-051105-m03 ...
	I0229 18:10:18.847427   29935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 18:10:18.847468   29935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:10:18.863544   29935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0229 18:10:18.864031   29935 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:10:18.864543   29935 main.go:141] libmachine: Using API Version  1
	I0229 18:10:18.864568   29935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:10:18.864907   29935 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:10:18.865120   29935 main.go:141] libmachine: (multinode-051105-m03) Calling .GetState
	I0229 18:10:18.866739   29935 status.go:330] multinode-051105-m03 host status = "Stopped" (err=<nil>)
	I0229 18:10:18.866751   29935 status.go:343] host is not running, skipping remaining checks
	I0229 18:10:18.866758   29935 status.go:257] multinode-051105-m03 status: &{Name:multinode-051105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.97s)
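
Note that `minikube status` deliberately exits non-zero (7 here) once any node is stopped, so a caller has to capture the exit code and still read stdout. A short Go sketch of that pattern follows; the profile name is a placeholder.

// statuscheck.go - sketch of reading `minikube status` when a node is down.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "multinode-051105" // placeholder

	out, err := exec.Command("minikube", "-p", profile, "status").Output()
	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode() // non-zero when any node is not Running
	} else if err != nil {
		log.Fatalf("could not run minikube status: %v", err)
	}

	stopped := strings.Count(string(out), "host: Stopped")
	fmt.Printf("exit code %d, %d node(s) reported Stopped\n", exitCode, stopped)
}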

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-051105 node start m03 --alsologtostderr: (28.262344932s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.89s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.53s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 node delete m03
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.53s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051105 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0229 18:25:46.837666   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:27:43.785672   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:27:46.665576   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:30:49.715824   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051105 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.850508697s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051105 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.39s)
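
The final `kubectl get nodes -o go-template` call above prints the Ready condition of every node. A small Go sketch of the same "every node Ready" check follows, assuming kubectl's current context points at the restarted cluster.

// readycheck.go - sketch of the all-nodes-Ready check, reusing the
// go-template from the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			log.Fatalf("found a node whose Ready condition is %q", status)
		}
	}
	fmt.Println("all nodes report Ready=True")
}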

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.2s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051105
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051105-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-051105-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (70.755293ms)

                                                
                                                
-- stdout --
	* [multinode-051105-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-051105-m02' is duplicated with machine name 'multinode-051105-m02' in profile 'multinode-051105'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051105-m03 --driver=kvm2  --container-runtime=crio
E0229 18:32:43.785669   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:32:46.663133   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051105-m03 --driver=kvm2  --container-runtime=crio: (45.881298119s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-051105
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-051105: exit status 80 (226.55355ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-051105
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-051105-m03 already exists in multinode-051105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-051105-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.20s)

                                                
                                    
TestScheduledStopUnix (120.72s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-137313 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-137313 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.013475596s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137313 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-137313 -n scheduled-stop-137313
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137313 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137313 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137313 -n scheduled-stop-137313
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-137313
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137313 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-137313
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-137313: exit status 7 (74.37524ms)

                                                
                                                
-- stdout --
	scheduled-stop-137313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137313 -n scheduled-stop-137313
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137313 -n scheduled-stop-137313: exit status 7 (73.644555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-137313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-137313
--- PASS: TestScheduledStopUnix (120.72s)
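
The flow above is: schedule a stop, then poll the host state until it reports Stopped. A minimal Go sketch of that flow follows; the profile name is a placeholder and the profile is assumed to be running with minikube on PATH.

// schedstop.go - sketch of scheduling a stop and waiting for it to land.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-137313" // placeholder

	// minikube stop -p <profile> --schedule 15s
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		log.Fatalf("scheduling stop: %v", err)
	}

	// Poll the host state; status exits non-zero once the host is stopped,
	// so only the printed value is inspected here.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("minikube", "status", "-p", profile, "--format", "{{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			log.Println("profile stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("profile did not stop within the polling window")
}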

                                                
                                    
TestRunningBinaryUpgrade (209.5s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.121328696 start -p running-upgrade-484999 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.121328696 start -p running-upgrade-484999 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m12.737787055s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-484999 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-484999 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.026115879s)
helpers_test.go:175: Cleaning up "running-upgrade-484999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-484999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-484999: (1.146596449s)
--- PASS: TestRunningBinaryUpgrade (209.50s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (88.237014ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-475488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (102.27s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-475488 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-475488 --driver=kvm2  --container-runtime=crio: (1m41.996351623s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-475488 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (152.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.868122215 start -p stopped-upgrade-945600 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0229 18:42:26.838530   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.868122215 start -p stopped-upgrade-945600 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.874254705s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.868122215 -p stopped-upgrade-945600 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.868122215 -p stopped-upgrade-945600 stop: (2.119473695s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-945600 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-945600 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.43732977s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.43s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (40.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0229 18:42:43.785569   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:42:46.663156   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.649878681s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-475488 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-475488 status -o json: exit status 2 (245.112206ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-475488","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-475488
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.89s)
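
The JSON printed above is a single status object, and the command exits non-zero (2 here) because the kubelet is stopped, so stdout has to be decoded even when an exit error comes back. A small Go sketch of that decoding follows, using the field names visible in the output above and a placeholder profile name.

// statusjson.go - sketch of decoding `minikube status -o json` for a
// --no-kubernetes profile.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// status mirrors the JSON object printed above.
type status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

func main() {
	profile := "NoKubernetes-475488" // placeholder

	out, err := exec.Command("minikube", "-p", profile, "status", "-o", "json").Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		log.Fatalf("running minikube status: %v", err)
	}

	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decoding status: %v", err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}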

                                                
                                    
TestNoKubernetes/serial/Start (32.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-475488 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.539698192s)
--- PASS: TestNoKubernetes/serial/Start (32.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-475488 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-475488 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.289246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
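
The check above treats a non-zero exit from `systemctl is-active` inside the guest as confirmation that the kubelet is not running. A minimal Go sketch of the same check follows; the profile name is a placeholder and minikube is assumed to be on PATH.

// kubeletcheck.go - sketch of verifying the kubelet is inactive in the guest.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "NoKubernetes-475488" // placeholder

	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("kubelet is active, but it was expected to be stopped")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet is not active, as expected (exit code", exitErr.ExitCode(), ")")
	default:
		log.Fatalf("could not run the check: %v", err)
	}
}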

                                                
                                    
TestNoKubernetes/serial/ProfileList (14.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.592154524s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-475488
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-475488: (1.194220822s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (27.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-475488 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-475488 --driver=kvm2  --container-runtime=crio: (27.228481681s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-475488 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-475488 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.233294ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-945600
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-945600: (1.076706137s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                    
TestNetworkPlugins/group/false (3.39s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-587185 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-587185 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.558254ms)

                                                
                                                
-- stdout --
	* [false-587185] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:44:32.797757   41196 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:44:32.797885   41196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:32.797897   41196 out.go:304] Setting ErrFile to fd 2...
	I0229 18:44:32.797902   41196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:32.798092   41196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6428/.minikube/bin
	I0229 18:44:32.798652   41196 out.go:298] Setting JSON to false
	I0229 18:44:32.799568   41196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5217,"bootTime":1709227056,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:44:32.799630   41196 start.go:139] virtualization: kvm guest
	I0229 18:44:32.801947   41196 out.go:177] * [false-587185] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:44:32.803707   41196 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:44:32.805313   41196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:44:32.803749   41196 notify.go:220] Checking for updates...
	I0229 18:44:32.807722   41196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6428/kubeconfig
	I0229 18:44:32.808975   41196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6428/.minikube
	I0229 18:44:32.810173   41196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:44:32.811449   41196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:44:32.813090   41196 config.go:182] Loaded profile config "force-systemd-env-588905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 18:44:32.813207   41196 config.go:182] Loaded profile config "kubernetes-upgrade-541086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 18:44:32.813315   41196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:44:32.848038   41196 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:44:32.849367   41196 start.go:299] selected driver: kvm2
	I0229 18:44:32.849379   41196 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:44:32.849389   41196 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:44:32.851214   41196 out.go:177] 
	W0229 18:44:32.852409   41196 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0229 18:44:32.853672   41196 out.go:177] 

                                                
                                                
** /stderr **
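The exit status 14 above is minikube refusing to start because --cni=false was combined with the crio runtime, which requires a CNI plugin (the MK_USAGE message in stderr). Purely as an illustrative sketch and not part of the test suite: the same profile could be started with an explicit CNI instead, for example the kindnet value exercised elsewhere in this run; the small Go wrapper below is hypothetical and only re-issues the command from the log with that one flag changed.

package main

// Hypothetical sketch: re-run the failed start with an explicit CNI, since the
// crio container runtime rejects --cni=false (MK_USAGE above). Profile name and
// flags are taken from the log; "--cni=kindnet" is just one CNI value used
// elsewhere in this report and is an example choice, not the suite's fix.

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "false-587185",
		"--memory=2048",
		"--cni=kindnet", // any real CNI satisfies the crio requirement
		"--driver=kvm2",
		"--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}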
net_test.go:88: 
----------------------- debugLogs start: false-587185 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-587185

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587185"

                                                
                                                
----------------------- debugLogs end: false-587185 [took: 3.122919584s] --------------------------------
helpers_test.go:175: Cleaning up "false-587185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-587185
--- PASS: TestNetworkPlugins/group/false (3.39s)

                                                
                                    
x
+
TestPause/serial/Start (138.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-848791 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-848791 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m18.116411395s)
--- PASS: TestPause/serial/Start (138.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (116.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-247197 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-247197 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m56.6641498s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (101.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m41.469532139s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-247197 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec662768-0583-4349-865f-30211e59a9e5] Pending
helpers_test.go:344: "busybox" [ec662768-0583-4349-865f-30211e59a9e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec662768-0583-4349-865f-30211e59a9e5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004556577s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-247197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)
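The DeployApp step above amounts to three actions: create testdata/busybox.yaml, wait for pods labelled integration-test=busybox to become Ready, then exec "ulimit -n" in the busybox pod. A rough stand-alone reproduction is sketched below, assuming kubectl is on PATH and the no-preload-247197 context exists; the names come from the log, while the kubectl wait call and its 8m timeout are an assumption that mirrors the test's "waiting 8m0s" message rather than the suite's own polling code.

package main

// Hypothetical reproduction of the DeployApp flow shown above, driven through
// kubectl. Context, manifest path and label are taken from the log.

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	return err
}

func main() {
	ctx := "--context=no-preload-247197"
	// 1. deploy the busybox test pod
	_ = run(ctx, "create", "-f", "testdata/busybox.yaml")
	// 2. wait for it to report Ready (assumed 8m, matching the log's wait window)
	_ = run(ctx, "wait", "--for=condition=ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m")
	// 3. run the same probe the test runs
	_ = run(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}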

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-153528 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-153528 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m9.526238224s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-991128 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2fcf54b7-4abb-4625-bf5c-5d635a884bb0] Pending
helpers_test.go:344: "busybox" [2fcf54b7-4abb-4625-bf5c-5d635a884bb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2fcf54b7-4abb-4625-bf5c-5d635a884bb0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004357105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-991128 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-247197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-247197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.172241818s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-247197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-991128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-991128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.072909248s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-991128 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [12d5f042-84d0-48ed-b979-ae5feff8b2a4] Pending
helpers_test.go:344: "busybox" [12d5f042-84d0-48ed-b979-ae5feff8b2a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [12d5f042-84d0-48ed-b979-ae5feff8b2a4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00390598s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-153528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-153528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019010502s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-153528 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (968s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-247197 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-247197 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (16m7.725091105s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247197 -n no-preload-247197
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (968.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (878.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 18:52:43.785782   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:52:46.662885   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m38.686219182s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991128 -n embed-certs-991128
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (878.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-631080 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-631080 --alsologtostderr -v=3: (1.277874184s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-631080 -n old-k8s-version-631080: exit status 7 (74.544581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-631080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
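The "status error: exit status 7 (may be ok)" line above is expected here: after a stop, minikube status reports the state through a non-zero exit code (7 in this run) while still printing "Stopped" on stdout, and the test treats that as acceptable. A minimal sketch of reading that exit code from Go follows; the binary path and profile name come from the log, and the 7-means-stopped mapping is taken from this run's output rather than asserted from documentation.

package main

// Minimal sketch: run "minikube status" for a stopped profile and inspect the
// exit code instead of treating any non-zero exit as a failure.

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-631080")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit encodes cluster state here (7 == stopped in this run).
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}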

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (859.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-153528 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 18:57:43.785230   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 18:57:46.663053   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 18:59:06.839148   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 19:02:43.785992   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/addons-848237/client.crt: no such file or directory
E0229 19:02:46.663943   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
E0229 19:04:09.717394   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-153528 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m18.774701226s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-153528 -n default-k8s-diff-port-153528
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (859.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-130594 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-130594 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.933026463s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (100.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m40.022763822s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-130594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-130594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.129354473s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-130594 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-130594 --alsologtostderr -v=3: (10.117891386s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-130594 -n newest-cni-130594
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-130594 -n newest-cni-130594: exit status 7 (74.844413ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-130594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (53.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-130594 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-130594 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (53.564211119s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-130594 -n newest-cni-130594
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (88.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.864978052s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.87s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-130594 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-130594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-130594 -n newest-cni-130594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-130594 -n newest-cni-130594: exit status 2 (289.979521ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-130594 -n newest-cni-130594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-130594 -n newest-cni-130594: exit status 2 (273.170495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-130594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-130594 -n newest-cni-130594
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-130594 -n newest-cni-130594
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (98.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.586570697s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ffmjq" [be47d821-7b09-4f39-8183-9a31037819c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ffmjq" [be47d821-7b09-4f39-8183-9a31037819c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006343986s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.35s)
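The three probes above check, in order, cluster DNS (nslookup kubernetes.default from the netcat pod), plain localhost connectivity, and hairpin traffic, i.e. the pod reaching its own Service name "netcat" on port 8080. As a hand-run equivalent of the hairpin probe only, here is a hypothetical sketch assuming the auto-587185 context and the netcat deployment shown in the log; it simply wraps the same kubectl exec / nc command the test issues.

package main

// Hypothetical stand-alone version of the HairPin probe above: from inside the
// netcat deployment, dial the "netcat" service itself on port 8080. A zero exit
// from nc means hairpin traffic (pod -> own service -> same pod) works.

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "auto-587185",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("hairpin probe failed: %v\n%s", err, out)
	} else {
		fmt.Println("hairpin probe succeeded")
	}
}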

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x7txg" [f7242b61-315b-44c1-87c5-3b0e8fcb4d13] Running
E0229 19:19:45.195964   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.201253   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.211553   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.232104   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.272390   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.352720   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.513303   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:45.833828   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:19:46.474928   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005742292s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mmn85" [79f52888-bf9c-449a-bb8b-42ed11af6d25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 19:19:50.316939   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-mmn85" [79f52888-bf9c-449a-bb8b-42ed11af6d25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004027332s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0229 19:19:55.438052   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.845187709s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (91.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m31.944235999s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (123.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0229 19:20:26.159249   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m3.402523234s)
--- PASS: TestNetworkPlugins/group/flannel/Start (123.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h42rn" [2eb162a3-af2e-4cd6-acaa-d121609066b4] Running
E0229 19:20:49.718506   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/functional-531072/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009728137s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p9sv2" [42c92d16-4325-45fb-a2d9-a80b2013f002] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 19:20:56.678702   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.684425   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.694768   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.715066   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.755416   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.835754   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:56.996045   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:57.316292   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:57.957324   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:20:59.238164   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:21:01.798542   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-p9sv2" [42c92d16-4325-45fb-a2d9-a80b2013f002] Running
E0229 19:21:06.918874   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
E0229 19:21:07.120273   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005043087s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0229 19:21:09.767844   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
E0229 19:21:09.773129   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
E0229 19:21:09.783374   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
E0229 19:21:09.803646   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
E0229 19:21:09.843910   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5p7d5" [ed63c02a-d59f-4964-843e-3a4665b88602] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5p7d5" [ed63c02a-d59f-4964-843e-3a4665b88602] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.006762668s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (100.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0229 19:21:30.249165   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m40.822385074s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.82s)
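For reference, the four CNI-variant cluster starts exercised in this run all follow the same pattern; only the profile name and the CNI flag change. The commands below are copied from the Run lines above (custom-flannel, enable-default-cni, flannel, bridge):

  out/minikube-linux-amd64 start -p custom-flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p enable-default-cni-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p flannel-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p bridge-587185 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio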

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rhm5t" [ee19907d-57c4-4bcf-bce3-2e64cf287a76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rhm5t" [ee19907d-57c4-4bcf-bce3-2e64cf287a76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005307021s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0229 19:21:37.641136   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/default-k8s-diff-port-153528/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kpp7p" [a15fd0a8-0f92-4381-8123-3895cc5840a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004992456s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
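The ControllerPod check waits for the flannel DaemonSet pods matching app=flannel in the kube-flannel namespace, as recorded above. A rough hand-check equivalent (the namespace and label are taken from the log; the kubectl get form is an assumption standing in for the harness's polling helper):

  # List the flannel controller pods the test waits on; all should show STATUS Running.
  kubectl --context flannel-587185 get pods -n kube-flannel -l app=flannel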

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rd6sk" [8e022413-4752-44bc-a6f9-b3d9dcd2fa62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 19:22:29.040554   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/no-preload-247197/client.crt: no such file or directory
E0229 19:22:31.690882   13651 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/old-k8s-version-631080/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-rd6sk" [8e022413-4752-44bc-a6f9-b3d9dcd2fa62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004787992s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-587185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-587185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6b6z5" [bf9d7713-11d4-48af-bb08-2f4626642a15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6b6z5" [bf9d7713-11d4-48af-bb08-2f4626642a15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004141335s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-587185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
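Each network plugin above runs the same three connectivity probes against the netcat deployment: a cluster DNS lookup, a loopback port check, and a hairpin check back through the pod's own service. The commands below mirror the Run lines above, using the bridge-587185 context as the example; swap in any other profile's context to repeat the probes there.

  # DNS: resolve the kubernetes.default service from inside the netcat pod.
  kubectl --context bridge-587185 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod can reach port 8080 on its own loopback interface.
  kubectl --context bridge-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # Hairpin: the pod can reach itself through its own service name.
  kubectl --context bridge-587185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"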

                                                
                                    

Test skip (39/304)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.44
266 TestNetworkPlugins/group/cilium 3.9
272 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-587185 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18259-6428/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:44:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.15:8443
  name: stopped-upgrade-945600
contexts:
- context:
    cluster: stopped-upgrade-945600
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:44:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: stopped-upgrade-945600
  name: stopped-upgrade-945600
current-context: stopped-upgrade-945600
kind: Config
preferences: {}
users:
- name: stopped-upgrade-945600
  user:
    client-certificate: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/stopped-upgrade-945600/client.crt
    client-key: /home/jenkins/minikube-integration/18259-6428/.minikube/profiles/stopped-upgrade-945600/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-587185

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587185"

                                                
                                                
----------------------- debugLogs end: kubenet-587185 [took: 3.28193091s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-587185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-587185
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-587185 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-587185" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
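Note: this kubeconfig is entirely empty (clusters, contexts and users are all null), which matches the cilium-587185 profile never having been started before the test was skipped. A minimal sketch of how such a profile would otherwise be created, with the driver and runtime assumed from this job's name (KVM_Linux_crio) rather than taken from this log:

out/minikube-linux-amd64 start -p cilium-587185 --driver=kvm2 --container-runtime=crio --cni=cilium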

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-587185

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-587185" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587185"

                                                
                                                
----------------------- debugLogs end: cilium-587185 [took: 3.75585252s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-587185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-587185
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-599421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-599421
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    